In our releases we sometimes build dynamic needs: parameters. An example looks like this:
needs:
# Generate list of releases that thanos has a dependency on
{{- range $realmRegions }}
{{- $regionMap := . }}
{{- $region := .name }}
{{- $region_key := .key }}
{{- range $okeClusters }}
{{- $cluster := . }}
{{- if and ($regionMap | get "deploy_region" false) (eq $cluster "deployment") }}
- infra-monitoring/prometheus-oper-{{ $region_key }}-{{ $cluster }}
{{- else if ne $cluster "deployment" }}
- infra-monitoring/prometheus-oper-{{ $region_key }}-{{ $cluster }}
{{- end }}
{{- end }}
{{- end }}
I know we could just use separate helmfiles, but setting that aside for now: this method worked in 0.125.8 and does not work in 0.125.9. The reported error simply says that a release built into the needs: parameter does not exist, when in reality it does. Here is the relevant output from the lint (which does not error):
0: templates:
1: prometheus_default: &prometheus_default
2: chart: bitnami/prometheus-operator
3: version: 0.31.0
4: namespace: infra-monitoring
5: # Allow installing CRDs and CRs in the same chart
6: disableValidation: true
....
18: releases:
19: # Iterate over all regions in the realm
20: # If region is marked as deploy_region, append deployment OKE type
21:
22: # Nested loop iterating over OKE cluster types per region
23:
24: - name: prometheus-oper-iad-controlplane
25: <<: *prometheus_default
26: kubeContext: "preprod_us-ashburn-1_controlplane"
27: labels:
....
And the error from templating on the same file:
err: "preprod_us-phoenix-1_deployment/infra-monitoring/thanos" depends on nonexistent release "infra-monitoring/prometheus-oper-iad-controlplane"
changing working directory back to "/home/jmwitkow/osvc-infra/helm/helmfile-releases"
I was able to narrow the break down to the v0.125.9 release. It also occurs in the latest release.
@jwitko I've also replied in our Slack channel but anyway - I think this is a side-effect of #1442.
Can you prepend preprod_us-ashburn-1_controlplane/ to needs entries?
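For the release from the lint output above, that would look like:
needs:
  - preprod_us-ashburn-1_controlplane/infra-monitoring/prometheus-oper-iad-controlplane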
Sorry for the breaking change, but I think it's much better to fail like this so that you notice that omitting the kubecontext part from the needs entries isn't "correct".
Perhaps we can make the error message a bit more informative though? What if the error message included the list of available release IDs?
@mumoshu is there a way to infer the kubecontext using a template variable, then? I am choosing it via the gcloud CLI and just want to use the current kubecontext.
For example:
needs:
- cluster-system/cert-manager
How would I prepend the context name here, given that I am not overriding the default one? Thank you
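Ideally something like the following, assuming I export the context name myself into an environment variable (KUBE_CONTEXT here is just a name I'd pick):
needs:
  - {{ env "KUBE_CONTEXT" }}/cluster-system/cert-manager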
@eenchev Hey! I admit it wasn't clear, but if you omit kubeContext under releases[] in your helmfile.yaml, you don't need to add the kubeContext part to your needs entry either. So, in your case I believe you can just omit the kubeContext part in the needs entry, and use something like cluster-system/cert-manager.
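In other words, a minimal sketch (the consumer release here is just a placeholder):
releases:
  - name: cert-manager
    namespace: cluster-system
    chart: jetstack/cert-manager
  - name: my-app                     # hypothetical release that depends on cert-manager
    namespace: cluster-system
    chart: stable/my-app             # placeholder chart
    needs:
      # no kubeContext prefix needed, because kubeContext is not set on the releases
      - cluster-system/cert-manager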
Will confirm the above solution (ty by the way!) on Tuesday when I'm back in the office, and close if it's all good
Hey there,
I also get the same in ./helmfile.yaml: in .helmfiles[0]: in releases/wks-dev/helmfile.yaml: "sci-wks-dev/monitoring/prometheus-party" depends on nonexistent release "monitoring/promparty-auth"
- <<: *default
name: prometheus-party
namespace: monitoring
chart: prometheus-community/kube-prometheus-stack
needs:
- monitoring/promparty-auth
- <<: *default
name: promparty-auth
namespace: monitoring
chart: incubator/raw
The default template has kubeContext explicitly set; without it, everything works fine.
Yea, sorry I never commented back. @mumoshu, this worked for me and it's no longer an issue.
@dennybaa if you set kubecontext you have to add it as a prefix to the needs: entries.
- <kubecontext>/monitoring/promparty-auth
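With the context name taken from the error message above, the fixed release would look like:
- <<: *default
  name: prometheus-party
  namespace: monitoring
  chart: prometheus-community/kube-prometheus-stack
  needs:
    - sci-wks-dev/monitoring/promparty-auth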
@mumoshu we've also come across this issue while upgrading. Why is this change not being reverted? A patch/fix version should not introduce breaking changes, and if it does, they should be rolled back rather than worked around.
We cannot remove kubecontext from our releases as it is necessary, and we can't start adding the kube context prefix to the needs block in all releases, which makes it impossible for us to upgrade helmfile.
Can we please consider rolling back this change?
@dudicoco I hear you.
But not including kubecontext in release uniqueness is incorrect. And we are still on 0.x, which allows breaking changes, although I don't want to break things without an important reason like this one.
What if we added a compatibility feature so that Helmfile falls back to the old-style release ID when the new, kubecontext-prepended release ID does not exist?
Thanks @mumoshu!
I was able to find a workaround in our case by prefixing the kube context automatically for all releases:
{{- with .needs }}
needs:
{{- range $item := . }}
  - {{ printf "%s/%s" $kubecontext $item }}
{{- end }}
{{- end }}
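Here $kubecontext is set earlier in the template; a simplified sketch (the environment value name is an assumption):
{{ $kubecontext := .Environment.Values.kubeContext }}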
This generates:
needs:
- arn:aws:eks:us-east-1:xxxxxxxxxxxxxx:cluster/dev/kube-system/aws-node
- arn:aws:eks:us-east-1:xxxxxxxxxxxxxx:cluster/dev/kube-system/kube-dns
- arn:aws:eks:us-east-1:xxxxxxxxxxxxxx:cluster/dev/kube-system/kube-proxy
Do you think the / within the context can cause issues?
In any case, it would be beneficial to add the compatibility feature for other people who might have a hard time dealing with the new change.