I just wanted to request a new feature: a vault-operator to manage a Vault HA cluster on Kubernetes.
@raoofm A Vault operator, with support for HA, is available through the Tectonic Vault Open Cloud service: https://coreos.com/open-cloud-services/#vault
@rezaloo thanks, yes, I'm aware, but I opened this issue for non-Tectonic Kubernetes users. I expect this to be a very common use case, and it would add much value to Vault as well.
@jefferai Is this on the radar?
Nope.
+1 for this
Introducing the Vault Operator project. Blog: https://coreos.com/blog/introducing-vault-operator-project GitHub: https://github.com/coreos/vault-operator
I assume this can probably be closed out with the OSS release of the operator.
thanks @philips
@jefferai can this be reopened? vault-operator needs maintainers:
https://github.com/coreos/vault-operator/issues/332
It seems like a very common use case, in line with OSS Kubernetes.
We have a Vault operator which is actively maintained and used in production by us and our customers. It's a very feature-rich operator, but should you have any requirements, let us know: https://github.com/banzaicloud/bank-vaults.
Among other features, these are the ones missing from the CoreOS operator:
@raoofm @jefferai @mcwienczek @sidewinder12s
We are working on our own open source Vault operator: https://github.com/kubevault
Please note that the Banzai Vault Operator is based on the new operator-framework, and we have described it in numerous blog posts already:
thanks @matyix @tamalsaha @bonifaido
I'll take a look
(Personal opinions to follow.) I think an operator is overkill for what's required here. I recommend you take a look at sethvargo/vault-on-gke and kelseyhightower/vault-on-google-kubernetes-engine.
Both those setups give you:
- The Vault servers are deployed as StatefulSets, which give you guaranteed (re)start ordering (important for upgrades) and consistent naming.
- After talking with some folks on Google's security team, it's also generally a best practice to run Vault in a dedicated namespace or even a dedicated cluster. This negates most of the benefits of an operator, since you'd be accessing Vault simply by an IP address rather than through k8s internals anyway.
- Once you're inside a container, you can use vault-kubernetes-authenticator as an initContainer to grab the initial Vault token for you (see the sketch after this list).
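To make that pattern concrete, here is a rough sketch of the two pieces: the servers as a StatefulSet and a client pod fetching a token via an initContainer. All names, image tags, env vars, and mount paths below are illustrative assumptions, not the actual manifests from those repos.

```yaml
# Sketch only: Vault servers as a StatefulSet (stable names, ordered restarts).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vault
  namespace: vault              # dedicated namespace, per the advice above
spec:
  serviceName: vault
  replicas: 3
  selector:
    matchLabels: {app: vault}
  template:
    metadata:
      labels: {app: vault}
    spec:
      containers:
        - name: vault
          image: vault:0.11.0   # hypothetical tag
          args: ["server"]
          ports:
            - {containerPort: 8200, name: api}
---
# Sketch only: a client pod fetching its Vault token via an initContainer.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  initContainers:
    - name: vault-authenticator
      image: sethvargo/vault-kubernetes-authenticator   # image name assumed
      env:
        - {name: VAULT_ADDR, value: "https://vault.vault.svc:8200"}
        - {name: VAULT_ROLE, value: "my-app"}            # env vars assumed
      volumeMounts:
        - {name: vault-token, mountPath: /var/run/secrets/vault}
  containers:
    - name: app
      image: my-app:latest                               # placeholder
      volumeMounts:
        - {name: vault-token, mountPath: /var/run/secrets/vault, readOnly: true}
  volumes:
    # In-memory volume shared between initContainer and app for the token.
    - {name: vault-token, emptyDir: {medium: Memory}}
```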
In terms of @matyix's list of requirements:
- Automatic Vault initialization
- Automated unsealing
- Root Token and Unseal Keys encrypted and stored in cloud KMS systems (Azure Key Vault, AWS KMS, GCP KMS, Alibaba KMS)
This is handled by vault-init, specifically on GCP. You could fork and adapt it to your cloud provider of choice. Auto-init/unseal is also an Enterprise feature you can pay for, for added availability. The process for init and initial unseal is very straightforward, and the code is there in OSS.
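For illustration, vault-init is typically wired up as a sidecar next to the Vault server container. This is a minimal sketch; the image and env var names are assumptions, so check the vault-init README for the real ones.

```yaml
# Sketch: a vault-init-style sidecar that polls the local Vault, initializes
# it once, and unseals it with keys it encrypts via Cloud KMS and stores in
# GCS. Image name and env vars are assumptions for illustration.
containers:
  - name: vault
    image: vault:0.11.0                 # hypothetical tag
    args: ["server"]
  - name: vault-init
    image: sethvargo/vault-init         # image name assumed
    env:
      - name: CHECK_INTERVAL            # how often to poll Vault's status
        value: "10"
      - name: GCS_BUCKET_NAME           # where encrypted unseal keys land
        value: "my-vault-unseal-keys"   # placeholder
      - name: KMS_KEY_ID                # KMS key that wraps the unseal keys
        value: "projects/p/locations/global/keyRings/r/cryptoKeys/k"
```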
- Also they can be stored in Kubernetes Secrets (however, this is not supposed to be used in production because of the current limitations of Kubernetes Secrets; see this doc for more details)
There are a lot of reasons this isn't feasible at the moment, the biggest being that Kubernetes Secrets have no concept of expiration/renewal, and that they are accessible in plaintext to anyone with admin access to the etcd cluster.
- Automated re/configuration of Vault based on a YAML/JSON file like: Auth backends, Secret backends, and policies
This would be outside of what I'd expect even an operator to do. Usually you'd have a central CI/CD server manage this along with capturing your config as code. If anything, this should be a _Vault_ feature, not a function of the operator.
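To illustrate the config-as-code approach, here is a hypothetical declarative file a CI/CD job could diff against a live server and apply via the Vault API. The schema is invented for this sketch; bank-vaults and other tools each define their own format.

```yaml
# Hypothetical config-as-code file for Vault. A CI/CD job would compare this
# against the running server and apply the differences via the Vault API.
# The schema here is made up for illustration.
auth:
  - type: kubernetes
    roles:
      - name: my-app
        bound_service_account_names: ["my-app"]
        bound_service_account_namespaces: ["default"]
        policies: ["my-app-read"]
secrets:
  - type: kv
    path: secret/
policies:
  - name: my-app-read
    rules: |
      path "secret/data/my-app/*" {
        capabilities = ["read", "list"]
      }
```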
- It is not tied to etcd at all, supports multiple storage backends (e.g. cloud provider storages)
It's generally _not_ recommended that you depend on Kubernetes' etcd for things. The setups I linked above leverage Google Cloud Storage, but that could be swapped for Cloud Spanner, Amazon S3, or any other supported backend; just update the Vault configuration in the YAML file (a sketch follows below).
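For example, the Vault server config can be shipped as a ConfigMap, and switching backends only touches the storage stanza. A sketch, with names as placeholders:

```yaml
# Sketch: Vault server config delivered as a ConfigMap. Swapping GCS for
# Spanner, S3, etc. only changes the storage stanza. Names are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: vault-config
  namespace: vault
data:
  vault.hcl: |
    listener "tcp" {
      address     = "0.0.0.0:8200"
      tls_disable = "true"              # TLS terminated elsewhere in this sketch
    }
    storage "gcs" {
      bucket     = "my-vault-storage"   # placeholder bucket name
      ha_enabled = "true"
    }
```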
I would like to address some concerns and defend the operator concept a little bit here:
About Kubernetes Secrets: storing the unseal keys there is intended for development environments like minikube, where no cloud features are available locally. As mentioned, it is not recommended for production use.

About etcd: I don't think any Vault operator has ever used Kubernetes's own etcd as Vault's storage backend; the CoreOS and the Banzai Cloud vault-operators both provision a dedicated etcd instance on top of Kubernetes if needed (see the sketch below). However, we prefer cloud provider storage as well.
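For reference, provisioning that dedicated etcd through the etcd-operator amounts to a single custom resource along these lines (name, size, and version are placeholders):

```yaml
# Sketch: the etcd-operator provisions a cluster from a custom resource
# like this one. Values are placeholders.
apiVersion: "etcd.database.coreos.com/v1beta2"
kind: EtcdCluster
metadata:
  name: vault-storage
spec:
  size: 3
  version: "3.2.13"
```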
Just want to comment on this point: it's not just Kubernetes' etcd storage that is the problem, it's all etcd storage. The CoreOS team pledged to support and maintain the etcd storage mechanism, but after the Red Hat acquisition that dried up, and currently bug/issue reports for the etcd storage mechanism get no response and are not being addressed. Any vault-operator implementation using etcd is not just unsupported by the Vault team; it is currently unsupported by anyone at all.
/cc @xiang90 @hasbro17
@philips mentioned that etcd-operator will be maintained. https://github.com/coreos/etcd-operator/issues/1719#issuecomment-380943135
etcd-operator isn't the etcd storage backend for Vault.
@jefferai 😄 I agree; banzaicloud/bank-vaults uses the etcd-operator to provision the etcd backend, which is why I mentioned it.
Hello - I'm going to close this issue for now, thank you for participating in the discussion. HashiCorp has recently released a Helm chart for Vault, you can find more details on that here:
Thanks!