Currently, helmfile does not take the release-level `kubeContext` into account when determining the uniqueness of a release. For the use case of deploying the same chart to many clusters, you have to use dynamic release naming to work around this, as shown in the sketch below.
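For reference, a minimal sketch of that workaround, assuming the same two-region/two-plane loop as the failing test case below (the name suffix is only illustrative): embedding the region and plane in the release name keeps every generated release unique.

```yaml
releases:
{{- range (list "us-ashburn-1" "us-phoenix-1") }}
{{- $region := . }}
{{- range (list "controlplane" "dataplane") }}
{{- $okeKubeContext := (printf "%s_%s_%s" "dev" $region .) }}
  # Embedding the region and plane in the name keeps every release unique
  - name: "example-grafana-{{ $region }}-{{ . }}"
    chart: bitnami/grafana
    kubeContext: "{{ $okeKubeContext }}"
{{- end }}
{{- end }}
```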
Repeatable failing test case:
```yaml
releases:
{{- range (list "us-ashburn-1" "us-phoenix-1") }}
{{- $region := . }}
{{- range (list "controlplane" "dataplane") }}
{{- $okeKubeContext := (printf "%s_%s_%s" "dev" $region .) }}
  - name: example-grafana
    chart: bitnami/grafana
    kubeContext: "{{ $okeKubeContext }}"
{{- end }}
{{- end }}
```
Error:

```
STDERR:
Error: release: already exists
```
From @voron in the chat:
ah, dynamic releases.
I'm able to reproduce this issue: all the releases get the same kubeContext, the last generated one. It may be by design; you're in the same single helmfile environment, kubeContext isn't used as a unique key to distinguish releases, and releases with the same name + namespace are merged in terms of kubeContext.
Interesting results show up with:
```yaml
  - name: example-grafana
    chart: "{{ $okeKubeContext }}"
    kubeContext: "{{ $okeKubeContext }}"
```
The chart is unique while the k8s context is the same in the debug output, which leads to the expected `release: already exists`.
But as soon as the release name + namespace are unique, there are no issues and the k8s context is unique too:
- name: "{{ $okeKubeContext }}"
chart: bitnami/grafana
kubeContext: "{{ $okeKubeContext }}"
or
```yaml
  - name: example-grafana
    chart: bitnami/grafana
    namespace: "{{ $okeKubeContext }}"
    kubeContext: "{{ $okeKubeContext }}"
```
@jwitko @voron Hey! Thank you so much for your detailed report.
Indeed, the issue was that when you have "duplicate" releases in terms of release namespace and name, the last occurrence of the release was repeated internally.
I thought I had fixed it in #1390, but it turned out that #1390 fixed the problem only for the unique release ID computation used by helmfile's internal release duplication check. Helmfile has a similar unique release ID computation for building the DAG of releases. That one lacked kubeContext in the computed ID, which resulted in this issue.
#1442 fixes it.