Helmfile: Can helmfile be used to sync config to multiple clusters in one run?

Created on 26 Jun 2019 · 6 comments · Source: roboll/helmfile

First off, let me say I really appreciate this project and all the contributors and maintainers who make it possible. Using helmfile has been a noticeable improvement over using "raw" helm in conjunction with Makefiles and bash to manage chart lifecycles on clusters. Thanks a lot for making the operationalization of my team's clusters more declarative and reproducible. Alright, with that pandering out of the way 😉 --


Is it possible to have multiple helmfile definitions be applied against multiple/different clusters during a helmfile sync run?

For example, when I'm operating on a single cluster, I have a single helmfile that describes it. To target it, you can use kubectl config use-context mycluster and then helmfile sync to apply the helmfile to that targeted cluster. But perhaps there is a way to apply helmfile to multiple clusters within one run of helmfile sync without having to switch contexts and re-run sync.

Some usability thoughts:

  • Could helmfile be used to define multiple helmfile.yamls for different clusters, and then have all of them applied in a single helmfile sync?
  • Could helmfile be used to define a _single_ helmfile.yaml, and within that helmfile have different kubectl contexts as targets under the releases: key?

Thanks very much for your time 👍

question

Most helpful comment

I've done this by having a helmfile at the root of a configuration monorepo, which delegates to other helmfiles. Each of the delegated helmfiles has a helmDefaults.kubeContext: foo set, which allows for each helmfile to set the appropriate context for each deployment.
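As a sketch of what one of those delegated helmfiles might look like (all names here are hypothetical -- the context, chart, and namespace are assumptions, not taken from the thread):

```yaml
# config/cert-manager/helmfile.yaml -- a hypothetical delegated helmfile.
# "prod-cluster" must be a context name that exists in your kubeconfig.
helmDefaults:
  kubeContext: prod-cluster

releases:
- name: cert-manager
  namespace: cert-manager
  chart: jetstack/cert-manager
```

The root helmfile then only needs `helmfiles:` entries pointing at each of these, and each sub-helmfile carries its own target context.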

There are only a few problems I've run into:

  • When having to do something weird, like with CRDs and cert-manager, you have to use a presync hook to run kubectl label... commands. The kubectl that's run by that hook is not run in the _same_ context as what's referred to in helmDefaults.kubeContext... So that's a little problematic? But not the end of the world.
  • The output gets noisy when a helmfile delegates to other helmfiles; the `List of updated releases:` output is emitted on a helmfile-by-helmfile basis, so you don't get a rollup of all the updated releases in one aggregate view.
  • When you run helmfile --quiet sync on the "root" helmfile, it'll run _that_ helmfile as quiet... but any of the delegated helmfiles don't seem to respect the --quiet flag:
```yaml
# THIS helmfile, e.g. helmfile.yaml, runs "quiet", but when the "delegate
# helmfiles" under `helmfiles:` run, they don't respect the --quiet flag:
helmfiles:
- path: config/cert-manager/helmfile.yaml
- path: config/scf/helmfile.yaml
```
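One possible workaround for the hook-context mismatch in the first point (a sketch only; the context name, chart, and label are hypothetical, not from this thread) is to pass the context to kubectl explicitly in the hook's args, so it matches helmDefaults.kubeContext:

```yaml
# Hypothetical: pin the hook's kubectl to the same context as the release,
# instead of relying on whatever the current kubeconfig context happens to be.
helmDefaults:
  kubeContext: prod-cluster

releases:
- name: cert-manager
  chart: jetstack/cert-manager
  hooks:
  - events: ["presync"]
    command: kubectl
    args: ["--context", "prod-cluster", "label", "namespace", "kube-system",
           "certmanager.k8s.io/disable-validation=true", "--overwrite"]
```

This duplicates the context name in two places, so templating it from a single value would be preferable in practice.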

Besides those issues, it's pretty easy to manage multiple clusters using helmfile (assuming you have all the proper kubectl config contexts on the machine you're running helmfile sync from).

Feel free to address any of the comments I made or close it whenever you like 👍 thanks!

All 6 comments

Just my two cents.

I really like the following approach:

```yaml
{{ if eq .Environment.Name "dev" }}
  kubeContext: gke_XXXX_europe-west1-b_mydevcluster
{{ end }}
```

However, it won't be a single command to sync everything, but a sequence of commands like helmfile --environment=dev apply, one per environment.
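That sequence could be wrapped in a small shell loop (a sketch; the environment names are assumptions, and `echo` is a dry-run stand-in so nothing is actually applied):

```shell
# Run `helmfile apply` once per environment.
# Drop the `echo` to actually invoke helmfile against each environment.
for env in dev prod; do
  echo helmfile --environment="$env" apply
done
```

Note this is still N sequential runs, not one run, so a failure partway leaves some environments unsynced.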

It's too bad that, as of now, this doesn't work with the tillerless plugin: https://github.com/roboll/helmfile/issues/642

Multi-cluster support is the thing I would really like to have as a first-class citizen.


UPD: in my original post I made an assumption that it might be possible to set kubeContext per release. But after inspecting the docs I doubt that, so I've removed that part of this message.

set kubeContext per release

It's possible since #682 :)

@aegershman Thanks a lot for your kind words and the positive feedback! It's really encouraging :)

First of all, I do want helmfile to deal with multiple clusters/contexts nicely. I'll probably be open to adding any features/changes to helmfile for that.

But perhaps there is a way to apply helmfile to multiple clusters within one run of helmfile sync without having to switch contexts and re-run sync.

I have not thought about it thoroughly yet. But I believe it is possible.

In its simplest form, we can currently do:

```yaml
releases:
- name: prod-app1
  kubeContext: prod
- name: preview-app1
  kubeContext: preview-app1
```

With go templates the possibilities are endless.
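For example (a sketch only; the `.Values.clusters` map and the context-naming pattern are assumptions, not part of helmfile itself), one could generate a release per cluster entry:

```yaml
# Hypothetical: emit one release per entry in .Values.clusters,
# where each entry maps an environment name to a kubeconfig context.
releases:
{{ range $name, $ctx := .Values.clusters }}
- name: app1-{{ $name }}
  chart: myrepo/app1
  kubeContext: {{ $ctx }}
{{ end }}
```

With `clusters: {prod: prod-ctx, preview: preview-ctx}` in values, this would render two releases, each pinned to its own context.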

I have a single helmfile that describes it

So you may have the root helmfile.yaml that delegates each environment to its own helmfile.yaml.

When you have a kubeContext that is fixed per environment helmfile.yaml, the root helmfile.yaml would look like the below:

```yaml
values:
- values.yaml #contains {prod,preview}Version

helmfiles:
- path: git+ssh://[email protected]/yourorg/envs//[email protected]?v={{.Values.prodVersion}}
- path: git+ssh://[email protected]/yourorg/envs//[email protected]?v={{.Values.previewVersion}}
```

In case you have a "template" or "skeleton" helmfile.yaml that is used to generate environment helmfiles, the root would look like:


```yaml
values:
- values.yaml #contains {prod,preview}{TemplateVersion,KubeContext}

helmfiles:
- path: git+ssh://[email protected]/yourorg/envs//[email protected]?v={{.Values.prodTemplateVersion}}
  values:
  - kubeContext: {{.Values.prodKubeContext}}
  - prod.yaml
- path: git+ssh://[email protected]/yourorg/envs//[email protected]?v={{.Values.previewTemplateVersion}}
  values:
  - kubeContext: {{.Values.previewKubeContext}}
  - preview.yaml
```

Note that the top-level values: is the secret feature added in #647, which is awaiting feedback from advanced users like you two before being documented!

So I believe you can try this today without waiting for any improvement on the helmfile side, unless you use tillerless.


Saw a lot of issues on kube-context but didn't get a definitive answer.

```yaml
helmDefaults:
{{ if eq .Environment.Name "prod" }}
kubeContext: test.kube1.net
{{ end }}
```

I have this kind of config in my helmfile but the context is not switching. Does that work with helm3?

I think this is a very good point: if the releases are kubeContext-aware, the hooks linked to those releases should be context-aware as well. After all, hooks are designed to prep the cluster before the release, and if the release is bound to a cluster, most likely the prep command should point to that cluster too.
