Helmfile: Importing existing k8s resources into a release

Created on 29 May 2020 · 13 Comments · Source: roboll/helmfile

I would like to import existing resources into my release.
This new helm feature allows you to adopt existing resources by annotating them: https://github.com/helm/helm/pull/7649
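As a quick illustration of what that helm feature checks (per helm/helm#7649, helm adopts an existing resource only if it carries the release-name/release-namespace annotations and the managed-by label), here is a small sketch that builds the corresponding kubectl commands. The resource names are illustrative:

```python
# Sketch: build the kubectl invocations that add the ownership metadata
# helm 3 checks before adopting an existing resource (helm/helm#7649).
# The kind/name/release values below are illustrative examples.

def adoption_commands(kind, name, release, namespace):
    """Return kubectl commands that mark an existing resource as
    owned by the given Helm release."""
    target = f"{kind}/{name}"
    return [
        ["kubectl", "-n", namespace, "annotate", target,
         f"meta.helm.sh/release-name={release}",
         f"meta.helm.sh/release-namespace={namespace}"],
        ["kubectl", "-n", namespace, "label", target,
         "app.kubernetes.io/managed-by=Helm"],
    ]

for cmd in adoption_commands("serviceaccount", "my-svc", "foo", "default"):
    print(" ".join(cmd))
```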

I was thinking of implementing this using the new Kustomize feature https://github.com/roboll/helmfile/pull/1172 but I couldn't get it to work.
Is it possible to use this feature just for patching existing k8s resources? If not, is there a different way to achieve this?

Thanks

Most helpful comment

Would love to follow up on this. I'd like to see a way to modify resources like the EKS-installed coredns

All 13 comments

@dudicoco Hey! Sorry, but I don't get it. For example, what would you expect your helmfile.yaml to look like when importing existing k8s resources? That may help me understand what you are trying to do.

Anyway, I was thinking that you would just manually modify your existing resources with kubectl annotate and then run helmfile apply as usual, so that helmfile/helm will adopt the resources as explained in helm/helm#7649. I'm not sure how Helmfile can help with that.

Hey @mumoshu, thanks for the reply! The helmfile.yaml will look like this:

releases:
- name: foo
  chart: my-repo/foo
  version: 1.0.0
  namespace: default
  installed: true
  jsonPatches:
  - target:
      version: v1
      kind: Deployment
      name: foo
      namespace: default
    patch:
    - op: replace
      path: /metadata/annotations
      value:
        meta.helm.sh/release-name: foo
        meta.helm.sh/release-namespace: default
    - op: replace
      path: /metadata/labels
      value:
        app.kubernetes.io/managed-by: Helm
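For what it's worth, jsonPatches entries like these follow RFC 6902, where a replace op swaps in a whole new value at the target path (so the value for /metadata/annotations should be a mapping, not a list). A minimal stdlib-only sketch of the replace semantics, applied to a toy manifest:

```python
# Minimal RFC 6902 "replace" applier (stdlib only), illustrating what
# the jsonPatches entries above would do to a rendered manifest.
# This is a sketch of the patch semantics, not helmfile's implementation.

def apply_replace(doc, path, value):
    """Apply a single JSON Patch 'replace' operation in place."""
    keys = path.strip("/").split("/")
    target = doc
    for k in keys[:-1]:
        target = target[k]
    target[keys[-1]] = value
    return doc

manifest = {"metadata": {"name": "foo", "annotations": {}, "labels": {}}}
apply_replace(manifest, "/metadata/annotations",
              {"meta.helm.sh/release-name": "foo",
               "meta.helm.sh/release-namespace": "default"})
apply_replace(manifest, "/metadata/labels",
              {"app.kubernetes.io/managed-by": "Helm"})
print(manifest["metadata"])
```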

Obviously I could patch the resources manually, but I would like the process to be automated with infrastructure as code.
When creating a new cluster with AWS EKS (and, I assume, with other providers as well) there are many resources that are created by default, such as aws-node and coredns, which I would like to manage using helm. A manual step during the provisioning process is not ideal.

@dudicoco Thanks!

I believe you need a mechanism other than jsonPatches for that. jsonPatches in helmfile works by patching the "desired" resources to be applied, whereas letting helm adopt existing resources requires patching the existing/live resources.

I do understand your use-case though. If I were you, I would probably enhance eksctl OR terraform-provider-eksctl OR helmfile to enable you to patch live resources. But implementing that in Helmfile seems like it would bloat Helmfile's scope. WDYT?

@mumoshu I wonder if this feature would provide the solution:
https://github.com/roboll/helmfile/pull/746

Not sure how it works exactly.

Please let me dive into my memory... anyway, I might have missed porting that feature in #1172, so it may not work now.

#746 was intended to work by importing existing resources before the first helm upgrade --install run, with the imported resources specified via a list of NAMESPACE/KIND/NAME entries.

Would that work for you if it worked as advertised?

I thought it was used like this:

releases:
- name: aws-auth
  chart: ...
  adopt:
  - kube-system/configmap/aws-auth
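Assuming the adopt entries really are NAMESPACE/KIND/NAME strings as described above, a small sketch of how such entries could be split apart and turned into the corresponding annotate commands (the command shape is an assumption, not #746's actual code):

```python
# Sketch: parse the NAMESPACE/KIND/NAME entries that #746's adopt list
# is described as using. How helm-x actually consumes them is not shown
# in this thread, so the annotate command below is an assumption.

def parse_adopt_entry(entry):
    """Split 'namespace/kind/name' into its three parts."""
    namespace, kind, name = entry.split("/", 2)
    return namespace, kind, name

ns, kind, name = parse_adopt_entry("kube-system/configmap/aws-auth")
print(ns, kind, name)  # kube-system configmap aws-auth
```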

@mumoshu I'm getting the following error: Error: unknown flag: --adopt

Perhaps using the helm-x binary is needed for this feature to work?

Regarding patching via terraform, this is not possible at the moment:
https://github.com/terraform-providers/terraform-provider-kubernetes/issues/723

Yes, I believe you're correct. But on the other hand, I thought helm x --adopt supported Helm 2. I probably need to rework that.

Otherwise, letting Helmfile leverage helm 3's ability to adopt resources would be nice as well.

Thanks @mumoshu.
Do you think this feature might be added any time soon?

In the meantime, I have found a workaround using hooks:

  hooks:
  - events:
    - presync
    showlogs: true
    command: kubectl
    args:
    - --context={{ $kubecontext }}
    - --namespace={{ $namespace }}
    - annotate
    - serviceaccount
    - my-svc
    - meta.helm.sh/release-name={{ .name }}
    - meta.helm.sh/release-namespace={{ $namespace }}
  - events:
    - presync
    showlogs: true
    command: kubectl
    args:
    - --context={{ $kubecontext }}
    - --namespace={{ $namespace }}
    - label
    - serviceaccount
    - my-svc
    - app.kubernetes.io/managed-by=Helm

This is working as expected; however, I have a problem: as part of our CI we run helmfile apply --args "--dry-run".
This command actually triggers the hook during the CI test. A solution would be to add an if block that excludes the hook during a dry run, but then helm would fail on a conflict with the existing resources: Error: rendered manifests contain a resource that already exists.
Do you have any advice on how to handle this chicken-and-egg problem?
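One possible way to sidestep the dry-run trigger is to move the kubectl calls into a wrapper script that no-ops when the CI pipeline signals a dry run, e.g. via an environment variable. Everything here (the DRY_RUN variable, the wrapper name) is hypothetical, and note it does not fully resolve the chicken-and-egg problem described above, since the conflict would still occur on a true dry run against unadopted resources:

```python
# Hypothetical presync wrapper: skip the adoption commands when the CI
# pipeline exports DRY_RUN=1, otherwise shell out to kubectl.
# This is not a helmfile feature, just one possible workaround sketch.
import os
import subprocess

def presync_adopt(cmds, env=os.environ):
    """Run the given commands unless a dry run is signalled."""
    if env.get("DRY_RUN") == "1":
        return []  # no-op during CI dry runs
    return [subprocess.run(cmd, check=True) for cmd in cmds]
```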

Would love to follow up on this. I'd like to see a way to modify resources like the EKS-installed coredns

@abatilo I've ended up importing the resources with a script on all of our existing clusters (it just annotates and adds labels to the resources).
I've also created a script which deletes the EKS default resources (coredns, kube-proxy, aws-node) on new clusters prior to installing these components with helmfile.
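A sketch of what such a cleanup script could look like, based on the resources named in the comment (the resource kinds and the --ignore-not-found flag are assumptions, not the author's actual script):

```python
# Sketch of a "delete EKS defaults before installing via helmfile"
# script, per the comment above. The kinds assigned to each resource
# are assumptions; verify them against your cluster before deleting.
EKS_DEFAULTS = [
    ("kube-system", "deployment", "coredns"),
    ("kube-system", "daemonset", "kube-proxy"),
    ("kube-system", "daemonset", "aws-node"),
]

def delete_commands(resources):
    """Build one kubectl delete command per (namespace, kind, name)."""
    return [["kubectl", "-n", ns, "delete", kind, name, "--ignore-not-found"]
            for ns, kind, name in resources]

for cmd in delete_commands(EKS_DEFAULTS):
    print(" ".join(cmd))
```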

I'm a bit confused, as this seems to automagically work for me already. I migrated the storage for some of my releases (and will soon write a blog post about that; I can link it here if anybody is interested).

During that, I deleted the PVC resource and later recreated it (for the new PV where the data was migrated to) without the labels helm uses to denote ownership (managed-by etc.).

After that, I had a PVC with the same name as the old one, but without the labels/annotations for helm.

I then executed helmfile diff which came back empty as the helm release did not change (but the cluster state did).

To then rectify the missing labels, I ran helmfile sync, after which the labels were present (and still are).

I have a gut feeling that this has to do with https://github.com/helm/helm/pull/7649, but if somebody can clarify why that worked for me, I'd be really happy!

I'm running:

  • helm v3.4.2
  • helmfile v0.135.0
