Currently we encrypt secrets, store them in Git, and then use a helm secrets wrapper when installing/upgrading environments.
This takes a fair amount of setup with keys, encryption etc. We should also have some nice integration with Vault.
Here's some background https://medium.com/ww-engineering/working-with-vault-secrets-on-kubernetes-fde381137d88
This looks handy too https://github.com/kelseyhightower/vault-on-google-kubernetes-engine
@sanketjpatel yeah, we're using the helm secrets plugin to host our production installation of Jenkins X, which does all the CI/CD for the Jenkins X repos - then we check the sealed secrets into git:
https://github.com/jenkins-x/cloud-environments/blob/master/env-jx-infra/secrets.yaml
Install the CoreOS Vault operator chart together with the Jenkins X platform. This
also requires the CoreOS etcd operator, because the Vault operator is
configured to use etcd as its backend (an etcd cluster with 3 nodes is created for
each Vault). A service account and role are also required.
Create a new vault using the vault CRD
Initialize and unseal the Vault. The init step returns the root token and the unseal keys:

```
vault init
vault operator unseal
```

Enable and configure the Kubernetes auth backend (the CA certificate file name is a placeholder):

```
vault auth-enable kubernetes
vault write auth/kubernetes/config kubernetes_host=<URL> \
  kubernetes_ca_cert=@<ca-cert-file> token_reviewer_jwt=$SA_TOKEN
```
Define a policy for the Jenkins X secrets and write it to Vault (the policy file name is a placeholder):

```
path "secret/jenkins-x/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
```

```
vault write sys/policy/jenkins-x-policy policy=@<policy-file>.hcl
```
Create a role bound to the Jenkins X service account:

```
vault write auth/kubernetes/role/jenkins-x-role \
  bound_service_account_names=$(SA_NAME) \
  bound_service_account_namespaces=jx \
  policies=jenkins-x-policy \
  ttl=1h
```
Log in using the Kubernetes auth backend:

```
vault write auth/kubernetes/login role=jenkins-x-role jwt=${SA_TOKEN}
```

```
Key                                      Value
---                                      -----
token                                    74603479-607d-4ab8-a406-d0456d9f3d65
token_accessor                           4893b0a1-f42a-bfd8-cd9c-c14b9bdb6095
token_duration                           1h0m0s
token_renewable                          true
token_policies                           [default jenkins-x-policy]
token_meta_role                          "jenkins-x-role"
token_meta_service_account_name          "default"
token_meta_service_account_namespace     "default"
token_meta_service_account_secret_name   "default-token-fndln"
token_meta_service_account_uid           "aaf6c23c-b04a-11e7-9aea-0245c85cf1cc"
```
Where do we store the root token and the unseal keys? The `~/.jx` folder?

Do we want to use one etcd cluster per Vault? It seems a bit heavy. Should we
consider other backends?

The Vault must remain unsealed. Do we need to build this into the operator,
even though it requires the unseal keys? Some people store the root token and unseal keys in a cloud KMS.
Maybe have a `jx` command to unseal the Vault.

Do we want one Vault per environment? We probably need one service
account per environment, or we can make use of AppRole to define a policy per
environment (https://www.vaultproject.io/docs/auth/approle.html)
https://www.vaultproject.io/docs/auth/kubernetes.html
https://github.com/coreos/vault-operator/blob/master/doc/user/kubernetes-auth-backend.md
https://github.com/coreos/vault-operator/blob/master/doc/user/vault.md
https://www.vaultproject.io/docs/concepts/seal.html
@ccojocar that all sounds great to me! :)
My attempt at answers...
`~/.jx` for now until we figure out something better

@jstrachan some additional findings:
```
$ kubectl get vault
NAME               AGE
jenkins-x-vault    2d
jenkins-x-vault2   7m

$ kubectl get pods
NAME                                                              READY     STATUS    RESTARTS   AGE
jenkins-x-vault-6c65d4d55c-2kdp6                                  2/2       Running   0          2d
jenkins-x-vault-6c65d4d55c-kg6mw                                  1/2       Running   0          2d
jenkins-x-vault-etcd-bgv4l94c42                                   1/1       Running   0          2d
jenkins-x-vault-etcd-nwhnppmgtk                                   1/1       Running   0          2d
jenkins-x-vault-etcd-z68kctsjfp                                   1/1       Running   0          2d
jenkins-x-vault2-859c6956f9-5gwd8                                 1/2       Running   0          7m
jenkins-x-vault2-859c6956f9-wksdv                                 1/2       Running   0          7m
jenkins-x-vault2-etcd-8fr889m628                                  1/1       Running   0          7m
jenkins-x-vault2-etcd-fdvwrqzwrn                                  1/1       Running   0          7m
jenkins-x-vault2-etcd-g4r9fxrr5s                                  1/1       Running   0          8m
vault-operator-6d4fd9499c-8rc9m                                   1/1       Running   0          3d
vault-operator-etcd-operator-etcd-backup-operator-5fcbd98d7b6rw   1/1       Running   0          3d
vault-operator-etcd-operator-etcd-operator-5bd448b847-r46xc       1/1       Running   0          3d
vault-operator-etcd-operator-etcd-restore-operator-c45865fcrcws   1/1       Running   0          3d
```
Another issue is that the etcd cluster does not have any persistence:
https://github.com/coreos/etcd-operator/issues/1323
The secrets are stored encrypted by etcd in memory, but if the cluster goes down, all the data is lost.
I think we should use a database as backend and share it across multiple Vaults; the data is encrypted on the Vault side anyhow. We can use a persistent volume for the database.
These are the storage backends supported by Vault:
https://github.com/hashicorp/vault/tree/master/physical
We decided to use cloud storage services as the backend for Vault, and https://www.minio.io/ as an in-cluster solution.
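As an illustration of what such a configuration could look like, here is a minimal sketch of the Vault server storage stanzas for the two options; the bucket name, endpoint, and credentials below are placeholders, not values from this thread:

```hcl
# Cloud option: Google Cloud Storage as Vault's storage backend.
storage "gcs" {
  bucket = "my-vault-storage"   # placeholder bucket name
}

# In-cluster option: Minio exposed via Vault's S3-compatible backend.
# storage "s3" {
#   bucket              = "my-vault-storage"
#   endpoint            = "http://minio.jx.svc.cluster.local:9000"  # placeholder
#   access_key          = "<minio-access-key>"
#   secret_key          = "<minio-secret-key>"
#   s3_force_path_style = "true"
# }

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = "true"   # illustration only; enable TLS in real deployments
}
```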
In order to achieve this, we need the following work items before doing the integration in jx:
Please see https://github.com/banzaicloud/bank-vaults for a Vault-Operator without etcd baked in, we prefer cloud storage services as well (however etcd is an option as well).
@bonifaido Thank you very much! This looks great; it's exactly what we need. It seems that the secrets engines are configurable, and the root token and unseal keys are stored in cloud KMS services. I will give it a try very soon. Do you know if there is a helm chart available? I haven't found one in the repository.
@ccojocar great, we don't have a helm chart for this right now, but in the operator/deploy directory you can see the plain Kubernetes manifests showing how to deploy it. Creating a helm chart from it is in the pipeline, but I haven't had the time - contributions are more than welcome!
@bonifaido Do you have some time to chat on the Kubernetes community Slack? I am trying to get bank-vaults running with GCS and I have some questions related to the GCP service account. I can't find how it is injected into the vault container. Typically, the path to the service account file is provided either in the vault config or in the GOOGLE_APPLICATION_CREDENTIALS environment variable.
I will try to get it running, but I would really appreciate it if you could give me a hand with this.
@ccojocar if you run on Google Cloud Kubernetes you don't have to inject the credentials to the container directly, you can start the cluster in a GCP service account which has the right privileges to read/write GCS and KMS. Please see: https://github.com/banzaicloud/bank-vaults#google-cloud
@bonifaido Thanks for the info. I modified the code to inject a dedicated SA with the permissions you listed. I also did some other fixes to get it running only with GCS, but there is still an issue with the unsealer.

The first log message is:

```
time="2018-07-10T09:47:15Z" level=info msg="vault is already initialized"
```

It seems that the vault is already initialized, therefore the root token cannot be stored in KMS.

```
time="2018-07-10T09:51:47Z" level=error msg="error unsealing vault: unable to get key 'vault-unseal-0': error getting object for key 'vault-unseal-0': storage: object doesn't exist"
```

Any idea what could have initialized the vault, apart from the unsealer container?
Secrets are now stored in Vault by the `jx install` command when running on GCP.
The `jx upgrade` command should also read the admin secrets from Vault.
It would be helpful to use secret references in the YAML values files provided to the helm charts. The jx CLI can process these YAMLs before starting the chart installation and fetch the secrets from Vault.
I think this will improve the current GitOps workflow quite a bit. The new workflow would look like this:
```yaml
jenkins:
  user: "vault:secret/admin/jenkins:username"
  password: "vault:secret/admin/jenkins:password"
```
The format could be `vault:<path>:<key>`
@jstrachan @rawlingsj @garethjevans What's your opinion on this?
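To make the proposal concrete, here is a minimal sketch (in Python, purely illustrative; `resolve_vault_refs` and the `read_secret` callback are hypothetical names, not part of jx) of how such `vault:<path>:<key>` references could be resolved in a values mapping before installing a chart:

```python
import re

# Matches the proposed reference format: vault:<path>:<key>
VAULT_REF = re.compile(r"^vault:(?P<path>[^:]+):(?P<key>[^:]+)$")

def resolve_vault_refs(values, read_secret):
    """Recursively replace vault:<path>:<key> strings using read_secret(path, key)."""
    if isinstance(values, dict):
        return {k: resolve_vault_refs(v, read_secret) for k, v in values.items()}
    if isinstance(values, list):
        return [resolve_vault_refs(v, read_secret) for v in values]
    if isinstance(values, str):
        m = VAULT_REF.match(values)
        if m:
            return read_secret(m.group("path"), m.group("key"))
    return values

# In-memory stand-in for a Vault client, just for this example.
secrets = {("secret/admin/jenkins", "username"): "admin",
           ("secret/admin/jenkins", "password"): "s3cr3t"}
resolved = resolve_vault_refs(
    {"jenkins": {"user": "vault:secret/admin/jenkins:username",
                 "password": "vault:secret/admin/jenkins:password"}},
    lambda path, key: secrets[(path, key)])
```

In the real flow, `read_secret` would wrap a Vault client read against the given path; non-reference values pass through untouched, so existing charts keep working.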
We inject secrets into Pods from Vault very similarly, but this happens in runtime with a mutating webhook, probably the code in this blog post could help you a bit: https://banzaicloud.com/blog/inject-secrets-into-pods-vault/
Thanks @bonifaido! This definitely looks interesting. I agree that the secrets resolution can be deferred to runtime / a mutating webhook for many use cases, but there are a lot of existing helm charts which rely on k8s secrets and expect to get the secret value injected via a helm value during installation.
I would imagine that we could support both approaches in the future. Especially for pipeline secrets and new applications, I would go with the mutating webhook because it seems more secure.
For recent Kubernetes versions, using a CSI driver like https://github.com/kubevault/csi-driver with custom volumes is most likely the way to go.
> but there are a lot of existing helm charts which rely on k8s secrets and expect to get the secret value injected via a helm value during installation.
The webhook supports that as well if you mean something like this here in the MySQL chart: https://github.com/helm/charts/blob/master/stable/mysql/templates/secrets.yaml#L14
Instead of the value, you should pass the vault:.... reference path here and the webhook takes care of the rest.
@bonifaido Is the webhook able to parse the secrets on the fly before they are mounted into a pod?
My understanding from the blog post you mentioned was that your solution is only able to parse environment variables from deployments, via an injected init container which provides the resolved environment variables with the secrets to the pod.
Yes, it is able to do that; this feature was added after the blog post was published, sorry for not mentioning that. Please see https://github.com/banzaicloud/bank-vaults/pull/327/files for the exact details.
I will have a look. Sounds interesting then! Thanks again!
Will this change bring improved Vault support for AWS? It feels like AWS with GitOps for the JX dev environment isn't supported yet due to Vault.
Does this change include AKS support for Vault? Is there a workaround (i.e. install GitOps then create Vault) to make the GitOps option work with AKS?
Can anyone outline any history as to what is required to get Vault integration working with AWS and Azure before I disappear into the codebase to find out the hard way? Depending on complexity and effort, I'm keen to contribute and get this working for AKS in the short term. Also - are there other issues tracking these Vault integrations that I may have missed?
For anyone else asking similar questions around AWS in the future: having looked through the code, I came across this PR, merged on 26th March, that suggests vault/AWS is a thing... #3277
@chrismellard BTW we're in the process of moving to jx boot as our strategic way to install/configure/upgrade/edit Jenkins X installations, storage, networking, ingress/TLS/DNS and whatnot, so if you're interested in getting storage/vault working for AKS, I'd maybe start there.
with boot you can specify the URLs of the storage locations for logs in your jx-requirements.yml. I've just polished the docs around that here: https://jenkins-x.io/getting-started/boot/#storage
I think if you specify the azblob://mybucket URLs for logs in your jx-requirements.yml file it might actually work on AKS :)
But it would be nice to add an AKS implementation of this interface: https://github.com/jenkins-x/jx/blob/36cec2384739d1e9f0116d51306841d838a7c3c8/pkg/cloud/factory/factory.go#L14 so that we can lazily create AKS buckets, verify they exist, etc. On GKE, if the user enables long-term storage, we usually let folks enter a bucket name to create, and otherwise lazily create one. It would be nice to have a similar UX on AKS too.
Awesome - Thanks @jstrachan - I'll start getting my head in to that space :)
@chrismellard I would love to help as well. Let's divide the work and get started.
@helayoty pm me on kubernetes.slack.com
/lifecycle stale
@ccojocar marking this as stale as it doesn't seem to be worked on atm
This issue mainly covered the initial integration of Vault in jx with GCP support. I am going to close it in favour of the following more specialised issues:
AWS support: https://github.com/jenkins-x/jx/issues/5055
Azure support: https://github.com/jenkins-x/jx/issues/5057
Improve secrets rotation: https://github.com/jenkins-x/jx/issues/4967 and https://github.com/jenkins-x/jx/issues/5058