When running `helmfile sync` with the following YAML:
```yaml
repositories:
- name: bitnami
  url: https://charts.bitnami.com/bitnami
  caFile: bitnami.pem

helmDefaults:
  args:
  - "--debug"
  atomic: true
  cleanupOnFail: true
  skipDeps: false

releases:
- name: cassandra-bit
  namespace: cassandra
  labels:
    team: noops
  chart: bitnami/cassandra
  values:
  - values-cassandra.yaml
  jsonPatches:
  - target:
      version: apps/v1
      kind: StatefulSet
      name: cassandra-bit
    patch:
    - op: add
      path: /spec/template/metadata/annotations
      value:
        deployment/timestamp: "{{ now | unixEpoch }}"
```
helmfile exits with an error: it can't find the only dependency of the chart.
Adding repo bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
running helm fetch bitnami/cassandra --untar -d /tmp/chartify775686341
using requirements.yaml:
{}
options: {false [/tmp/values195671422] [] cassandra true}
running helm template --debug=false --include-crds --output-dir /tmp/chartify775686341/cassandra/helmx.1.rendered -f /tmp/chartify775686341/cassandra/values.yaml -f /tmp/values195671422 --namespace cassandra cassandra-bit /tmp/chartify775686341/cassandra
patching files: [/tmp/chartify775686341/cassandra/templates/networkpolicy.yaml /tmp/chartify775686341/cassandra/templates/cassandra-secret.yaml /tmp/chartify775686341/cassandra/templates/headless-svc.yaml /tmp/chartify775686341/cassandra/templates/service.yaml /tmp/chartify775686341/cassandra/templates/statefulset.yaml]
jsonpatches/patch.0.yaml:
- op: add
path: /spec/template/metadata/annotations
value:
deployment/timestamp: "1611071318"
generated and using kustomization.yaml:
kind: ""
apiversion: ""
resources:
- templates/networkpolicy.yaml
- templates/cassandra-secret.yaml
- templates/headless-svc.yaml
- templates/service.yaml
- templates/statefulset.yaml
patchesJson6902:
- target:
kind: StatefulSet
name: cassandra-bit
version: apps/v1
path: jsonpatches/patch.0.yaml
Generating /tmp/chartify775686341/cassandra/all.patched.yaml
running kustomize build /tmp/chartify775686341/cassandra --output /tmp/chartify775686341/cassandra/all.patched.yaml
Removing /tmp/chartify775686341/cassandra/templates
Removing /tmp/chartify775686341/cassandra/charts
Removing /tmp/chartify775686341/cassandra/crds
Removing /tmp/chartify775686341/cassandra/strategicmergepatches
Removing /tmp/chartify775686341/cassandra/kustomization.yaml
Affected releases are:
cassandra-bit (/tmp/chartify775686341/cassandra) UPDATED
Upgrading release=cassandra-bit, chart=/tmp/chartify775686341/cassandra
Release "cassandra-bit" does not exist. Installing it now.
FAILED RELEASES:
NAME
cassandra-bit
in ./helmfile.yaml: failed processing release cassandra-bit: command "/home/opdt/.local/bin/helm" exited with non-zero status:
PATH:
/home/opdt/.local/bin/helm
ARGS:
0: helm (4 bytes)
1: upgrade (7 bytes)
2: --install (9 bytes)
3: --reset-values (14 bytes)
4: cassandra-bit (13 bytes)
5: /tmp/chartify775686341/cassandra (32 bytes)
6: --atomic (8 bytes)
7: --cleanup-on-fail (17 bytes)
8: --create-namespace (18 bytes)
9: --namespace (11 bytes)
10: cassandra (9 bytes)
11: --values (8 bytes)
12: /tmp/values755635552 (20 bytes)
13: --history-max (13 bytes)
14: 10 (2 bytes)
15: --debug (7 bytes)
ERROR:
exit status 1
EXIT STATUS
1
STDERR:
history.go:53: [debug] getting history for release cassandra-bit
install.go:172: [debug] Original chart version: ""
install.go:189: [debug] CHART PATH: /tmp/chartify775686341/cassandra
Error: found in Chart.yaml, but missing in charts/ directory: common
helm.go:81: [debug] found in Chart.yaml, but missing in charts/ directory: common
helm.sh/helm/v3/pkg/action.CheckDependencies
/home/circleci/helm.sh/helm/pkg/action/install.go:606
main.runInstall
/home/circleci/helm.sh/helm/cmd/helm/install.go:215
main.newUpgradeCmd.func2
/home/circleci/helm.sh/helm/cmd/helm/upgrade.go:114
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:842
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/[email protected]/command.go:950
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:887
main.main
/home/circleci/helm.sh/helm/cmd/helm/helm.go:80
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1373
COMBINED OUTPUT:
history.go:53: [debug] getting history for release cassandra-bit
Release "cassandra-bit" does not exist. Installing it now.
install.go:172: [debug] Original chart version: ""
install.go:189: [debug] CHART PATH: /tmp/chartify775686341/cassandra
Error: found in Chart.yaml, but missing in charts/ directory: common
helm.go:81: [debug] found in Chart.yaml, but missing in charts/ directory: common
helm.sh/helm/v3/pkg/action.CheckDependencies
/home/circleci/helm.sh/helm/pkg/action/install.go:606
main.runInstall
/home/circleci/helm.sh/helm/cmd/helm/install.go:215
main.newUpgradeCmd.func2
/home/circleci/helm.sh/helm/cmd/helm/upgrade.go:114
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:842
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/[email protected]/command.go:950
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:887
main.main
/home/circleci/helm.sh/helm/cmd/helm/helm.go:80
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1373
If I remove the block

```yaml
jsonPatches:
- target:
    version: apps/v1
    kind: StatefulSet
    name: cassandra-bit
  patch:
  - op: add
    path: /spec/template/metadata/annotations
    value:
      deployment/timestamp: "{{ now | unixEpoch }}"
```

everything works fine ... except the patch, of course.
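For reference, here is a minimal Python sketch (not helmfile's actual implementation) of what the RFC 6902 `add` op in that jsonPatches block is meant to do to the rendered StatefulSet. The `manifest` dict is a hypothetical stand-in for the rendered template:

```python
import time

def json_patch_add(doc, path, value):
    """Apply a single RFC 6902 'add' op for a map-valued JSON pointer path."""
    keys = path.lstrip("/").split("/")
    node = doc
    for key in keys[:-1]:
        node = node[key]  # walk down to the parent of the target
    node[keys[-1]] = value  # 'add' sets the member on the parent map
    return doc

# Hypothetical minimal stand-in for the rendered StatefulSet manifest.
manifest = {"kind": "StatefulSet", "spec": {"template": {"metadata": {}}}}

json_patch_add(
    manifest,
    "/spec/template/metadata/annotations",
    {"deployment/timestamp": str(int(time.time()))},
)
print(manifest["spec"]["template"]["metadata"]["annotations"])
```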
```
{Version:kustomize/v3.9.2 GitCommit:e98eada7365fc564c9aba392e954f306a9cbf1dd BuildDate:2021-01-17T17:44:16Z GoOs:linux GoArch:amd64}
version.BuildInfo{Version:"v3.4.2", GitCommit:"23dd3af5e19a02d4f4baa5b2f242645a1a3af629", GitTreeState:"clean", GoVersion:"go1.14.13"}
helmfile version v0.135.0
```
Could this be in some way related to https://github.com/roboll/helmfile/issues/1277#issuecomment-762750238?
It's not the same issue, but the root cause for both seems to be the new dependency management that was recently introduced.
With strategicMergePatches everything works fine, no dependency errors.
But I need the ability to delete.
I'm not sure if this is related to #1277.
The problem seems to be that once you've defined jsonPatches, _helmfile_ builds a temporary chart using chartify, and after applying the layers with kustomize and getting ready to deploy, it doesn't fetch the remote bitnami/common Helm chart that the upstream Helm chart declares as a dependency.
I have a similar issue, as I'd like to use incubator/raw as a dependency to work around the lack of support for creating brand-new k8s objects (e.g. ServiceMonitor) without defining a separate release.
> With strategicMergePatches everything works fine, no dependency errors.
> But I need the ability to delete.
> I'm not sure if this is related to #1277.
@jmederer Can you confirm that it does work with strategicMergePatches?
I've just tried it locally, and it also fails, just like with jsonPatches.
@dalbani, this is my test, and it still runs without error.
This was my strategicMergePatches test:
```yaml
helmDefaults:
  args:
  - "--debug"

releases:
- name: cassandra-3
  namespace: cassandra
  labels:
    team: noops
  chart: charts/bitnami/cassandra
  values:
  - values-3.yaml
  strategicMergePatches:
  - apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: cassandra-3
      namespace: cassandra
      labels:
        opdttest: MiSstrategicMergePatchest
    spec:
      template:
        spec:
          hostNetwork: true
          containers:
          - name: cassandra
            env:
            - name: test_3
              value: worksfine
            ports:
            - name: intra
              containerPort: {{ .Values.service.port }}
```
But the problem here is that for the strategic merge, the merge key for `ports` is not the `name` field but the value of `containerPort`.
For example, in the template:

```yaml
ports:
- name: intra
  containerPort: 8000
```

in the strategicMergePatches you add:

```yaml
ports:
- name: intra
  containerPort: 9000 # <- your new desired port for intra-node communication
```

and the result is:

```yaml
ports:
- name: intra
  containerPort: 8000
- name: intra
  containerPort: 9000
```
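The merge-key behaviour above can be illustrated with a toy Python model of strategic-merge list merging (this is only a sketch under the assumption that lists are merged by a single key; the real logic lives in the Kubernetes apimachinery, which merges container ports on `containerPort`):

```python
def merge_by_key(base, patch, key):
    """Toy strategic-merge: merge two lists of dicts by the given merge key."""
    merged = {item[key]: dict(item) for item in base}
    for item in patch:
        merged.setdefault(item[key], {}).update(item)
    return list(merged.values())

base = [{"name": "intra", "containerPort": 8000}]
patch = [{"name": "intra", "containerPort": 9000}]

# Merging on "containerPort" (the actual merge key): both entries survive,
# which is the duplication observed above.
assert len(merge_by_key(base, patch, "containerPort")) == 2

# Merging on "name" (what one might expect): the port is replaced in place.
assert merge_by_key(base, patch, "name") == [{"name": "intra", "containerPort": 9000}]
```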
I'm running into the same issue with scylla-operator/helm/scylla-manager, which has a dependency on the Scylla chart. Neither strategicMergePatches nor jsonPatches seems to work.
COMBINED OUTPUT:
install.go:173: [debug] Original chart version: "0.0.0"
install.go:190: [debug] CHART PATH: /var/folders/st/4gfyr5j155g15z2c3gz9019c0000gn/T/chartify025221559/scylla-manager-scylla-manager-67f8b79f76/scylla-manager
Error: found in Chart.yaml, but missing in charts/ directory: scylla
helm.go:81: [debug] found in Chart.yaml, but missing in charts/ directory: scylla
```
❯ kustomize version
{Version:kustomize/v4.0.5 GitCommit:9e8e7a7fe99ec9fbf801463e8607928322fc5245 BuildDate:2021-03-08T22:48:12+00:00 GoOs:darwin GoArch:amd64}
❯ helm version
version.BuildInfo{Version:"v3.5.3", GitCommit:"041ce5a2c17a58be0fcd5f5e16fb3e7e95fea622", GitTreeState:"dirty", GoVersion:"go1.16"}
❯ helmfile version
helmfile version v0.138.7
```
Is there an idea of what a fix would look like? If so, I could give it a go and see if I can implement it.
Same problem here.
Broken for both strategicMergePatches and jsonPatches.
Example helmfile.yaml:
```yaml
repositories:
- name: prometheus-community
  url: https://prometheus-community.github.io/helm-charts

releases:
- name: metrics
  namespace: monitoring
  chart: prometheus-community/kube-prometheus-stack
  version: 13.13.0
  jsonPatches:
  - target:
      version: v1
      kind: Service
      name: metrics-kube-prometheus-st-kube-etcd
      namespace: kube-system
    patch:
    - op: replace
      path: /spec/ports/0/port
      value: "2381"
    - op: replace
      path: /spec/ports/0/targetPort
      value: "2381"
```
The error is:
Templating release=metrics, chart=/tmp/chartify882334548/monitoring-metrics-64955bd8ff/kube-prometheus-stack
in ./helmfile.yaml: command "/usr/sbin/helm" exited with non-zero status:
PATH:
/usr/sbin/helm
ARGS:
0: helm (4 bytes)
1: template (8 bytes)
2: metrics (7 bytes)
3: /tmp/chartify882334548/monitoring-metrics-64955bd8ff/kube-prometheus-stack (74 bytes)
4: --version (9 bytes)
5: 13.13.0 (7 bytes)
6: --namespace (11 bytes)
7: monitoring (10 bytes)
ERROR:
exit status 1
EXIT STATUS
1
STDERR:
Error: found in Chart.yaml, but missing in charts/ directory: kube-state-metrics, prometheus-node-exporter, grafana
COMBINED OUTPUT:
Error: found in Chart.yaml, but missing in charts/ directory: kube-state-metrics, prometheus-node-exporter, grafana
I tried the workaround `helmfile template --args="--set kubeStateMetrics.enabled=false,nodeExporter.enabled=false,grafana.enabled=false --dependency-update"`, but this fails because stdout and stderr get mixed together (the dependency download shows up in the middle of the manifests).
Looking at the temporary chart from chartify, the dependencies are still listed in Chart.yaml but are not present in the charts/ directory.
I believe a fix could either
a) add empty chart mocks with the names of the dependencies to the temporary chart directory, or
b) remove the list of dependencies from Chart.yaml.
What do you think?
I implemented a quick test/fix using yq within the chartify function to remove the dependencies from Chart.yaml:

```go
if u.SkipDeps {
	cmd := exec.Command("yq", "eval", "del(.dependencies)", "-i", chartYamlPath)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return "", err
	}
}
```
It solves the issue in this case.
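For environments without yq, the same idea can be sketched in plain Python by dropping the top-level `dependencies:` block line by line. This is an indentation-based hack that assumes a plainly formatted Chart.yaml; a real fix should use a proper YAML parser:

```python
def strip_dependencies(chart_yaml: str) -> str:
    """Remove the top-level 'dependencies:' block from Chart.yaml text."""
    out, skipping = [], False
    for line in chart_yaml.splitlines():
        if line.startswith("dependencies:"):
            skipping = True
            continue
        # A new top-level key (column 0, not a list item) ends the block.
        if skipping and line and not (line[0].isspace() or line.startswith("-")):
            skipping = False
        if not skipping:
            out.append(line)
    return "\n".join(out) + "\n"

chart = (
    "apiVersion: v2\n"
    "name: cassandra\n"
    "dependencies:\n"
    "- name: common\n"
    "  repository: https://charts.bitnami.com/bitnami\n"
    "version: 9.0.0\n"
)
print(strip_dependencies(chart))
# → apiVersion, name and version survive; the dependencies block is gone
```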
But it uncovered a new issue: when using patches, CRDs are always included. Flags to exclude the CRDs in helmfile or helm have no effect.
If I understand the original issue correctly, we shouldn't omit dependencies from the temporary chart, as doing so would break the chart? Perhaps the right way forward would be to make chartify call `helm dep build` first, so it doesn't fail during `helm template`?
I think the problem is that chartify removes the charts/ folder while leaving Chart.yaml, with its dependencies, in place.
--skip-deps doesn't work either, even though the Helm chart is packaged with its dependencies.
We should support --skip-deps, and chartify should copy the original charts/ folder into the new temporary chart.
helmfile -e dev sync --skip-deps
running helm fetch bitnami/postgresql --untar -d /tmp/chartify893720333/sample-app-dev-database-76b49cd894 --version 10.3.13
using requirements.yaml:
{}
options: {false [/tmp/helmfile143300244/sample-app-dev-database-values-6ffc655d47 /tmp/helmfile296865507/sample-app-dev-database-values-d775c774 /tmp/helmfile624071910/sample-app-dev-database-values-7cf955576d] [] sample-app-dev 10.3.13 true}
running helm template --debug=false --include-crds --output-dir /tmp/chartify893720333/sample-app-dev-database-76b49cd894/postgresql/helmx.1.rendered -f /tmp/chartify893720333/sample-app-dev-database-76b49cd894/postgresql/values.yaml -f /tmp/helmfile143300244/sample-app-dev-database-values-6ffc655d47 -f /tmp/helmfile296865507/sample-app-dev-database-values-d775c774 -f /tmp/helmfile624071910/sample-app-dev-database-values-7cf955576d --namespace sample-app-dev database /tmp/chartify893720333/sample-app-dev-database-76b49cd894/postgresql
patching files: [/tmp/chartify893720333/sample-app-dev-database-76b49cd894/postgresql/templates/statefulset.yaml /tmp/chartify893720333/sample-app-dev-database-76b49cd894/postgresql/templates/secrets.yaml /tmp/chartify893720333/sample-app-dev-database-76b49cd894/postgresql/templates/svc-headless.yaml /tmp/chartify893720333/sample-app-dev-database-76b49cd894/postgresql/templates/svc.yaml]
generated and using kustomization.yaml:
kind: ""
apiversion: ""
resources:
- templates/statefulset.yaml
- templates/secrets.yaml
- templates/svc-headless.yaml
- templates/svc.yaml
transformers:
- transformers/transformer.0.yaml
Generating /tmp/chartify893720333/sample-app-dev-database-76b49cd894/postgresql/all.patched.yaml
running kustomize build /tmp/chartify893720333/sample-app-dev-database-76b49cd894/postgresql --output /tmp/chartify893720333/sample-app-dev-database-76b49cd894/postgresql/all.patched.yaml
Removing /tmp/chartify893720333/sample-app-dev-database-76b49cd894/postgresql/templates
Removing /tmp/chartify893720333/sample-app-dev-database-76b49cd894/postgresql/charts
Removing /tmp/chartify893720333/sample-app-dev-database-76b49cd894/postgresql/crds
Removing /tmp/chartify893720333/sample-app-dev-database-76b49cd894/postgresql/strategicmergepatches
Removing /tmp/chartify893720333/sample-app-dev-database-76b49cd894/postgresql/kustomization.yaml
Affected releases are:
database (/tmp/chartify893720333/sample-app-dev-database-76b49cd894/postgresql) UPDATED
Upgrading release=database, chart=/tmp/chartify893720333/sample-app-dev-database-76b49cd894/postgresql
Release "database" does not exist. Installing it now.
FAILED RELEASES:
NAME
database
in ./helmfile.yaml: failed processing release database: command "/usr/bin/helm" exited with non-zero status:
PATH:
/usr/bin/helm
ARGS:
0: helm (4 bytes)
1: upgrade (7 bytes)
2: --install (9 bytes)
3: --reset-values (14 bytes)
4: database (8 bytes)
5: /tmp/chartify893720333/sample-app-dev-database-76b49cd894/postgresql (68 bytes)
6: --version (9 bytes)
7: 10.3.13 (7 bytes)
8: --wait (6 bytes)
9: --timeout (9 bytes)
10: 600s (4 bytes)
11: --namespace (11 bytes)
12: sample-app-dev (14 bytes)
13: --values (8 bytes)
14: /tmp/helmfile467680520/sample-app-dev-database-values-57b8c8c485 (64 bytes)
15: --values (8 bytes)
16: /tmp/helmfile483091143/sample-app-dev-database-values-6b48449db8 (64 bytes)
17: --values (8 bytes)
18: /tmp/helmfile845181818/sample-app-dev-database-values-9778489db (63 bytes)
19: --history-max (13 bytes)
20: 10 (2 bytes)
ERROR:
exit status 1
EXIT STATUS
1
STDERR:
Error: found in Chart.yaml, but missing in charts/ directory: common
COMBINED OUTPUT:
Release "database" does not exist. Installing it now.
Error: found in Chart.yaml, but missing in charts/ directory: common
> chartify removes the charts/ folder while leaving Chart.yaml, with its dependencies, in place.
@strainovic Thanks for the investigation! It does seem problematic.
A bit of context: chartify removes the charts/ folder to avoid duplicating manifests from subcharts. The patching basically works by running `helm template > all.yaml` and patching all.yaml with `kustomize build`; all.yaml already contains all the resources from the charts/ directory. If charts/ were not removed from the temporary chart produced by chartify, the final `helm template` call made by helmfile would render the manifests from charts/ again, resulting in duplication. So chartify removing the charts/ directory is OK.
I'll reread the chartify code and try to see how I can verify/fix this.
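The duplication concern above can be modelled with a tiny Python sketch (hypothetical file names, just to make the reasoning concrete):

```python
def helm_template(templates, subchart_templates):
    """Toy stand-in for 'helm template': renders the chart's own
    templates plus everything under charts/."""
    return list(templates) + list(subchart_templates)

parent = ["statefulset.yaml", "service.yaml"]
common = ["common-configmap.yaml"]  # pretend subchart resource

# chartify's all.patched.yaml already flattens the subchart resources in:
all_patched = helm_template(parent, common)

# If charts/ were kept, the final render would emit the subchart again:
final_with_charts = helm_template(all_patched, common)
assert final_with_charts.count("common-configmap.yaml") == 2  # duplicated!

# With charts/ removed, each resource appears exactly once:
final_without_charts = helm_template(all_patched, [])
assert final_without_charts.count("common-configmap.yaml") == 1
```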
Please see https://github.com/variantdev/chartify/commit/07c3054d66021fa190464a998ade18607312c715 and #1759 for the fix.
Thanks again for sharing your insight @strainovic! It helped a lot!
@damoon Hey! You do seem to have uncovered yet another bug in chartify.

> when using patches, CRDs are always included. Flags to exclude the CRDs in helmfile or helm have no effect.

chartify seems to have been adding --include-crds regardless of how you configure helmfile:
https://github.com/variantdev/chartify/blob/07c3054d66021fa190464a998ade18607312c715/replace.go#L62
Probably it was originally there to retain compatibility between Helm 2 and Helm 3, as Helm 2 had no option to disable rendering CRDs (right?).
It doesn't make sense now, as helmfile has --include-crds.
@mumoshu Using jsonPatches/strategicMergePatches fails for me with a CRD-related error:
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: CustomResourceDefinition "alertmanagerconfigs.monitoring.coreos.com" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "monitoring"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "monitoring"
This is with the latest helmfile built from source. The same helmfile works fine without jsonPatches/strategicMergePatches.
Should I create a new issue, or is it the same problem?
@azkore Hey! Yours seems to be the same issue as https://github.com/roboll/helmfile/pull/1771#issuecomment-817262958. Could you confirm? I'll fix it tomorrow.
@azkore #1774 has been merged to fix it. Would you mind giving it a shot?
@mumoshu It didn't help, unfortunately. `helmfile apply` installs kube-prometheus-stack without any issue until I add this test snippet:
```yaml
strategicMergePatches:
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: monitoring-prometheus-oper-proxy
    namespace: monitoring
  data:
    bar: BAR
```
But if I add it, there is an error:
ARGS:
0: helm (4 bytes)
1: upgrade (7 bytes)
2: --install (9 bytes)
3: --reset-values (14 bytes)
4: monitoring (10 bytes)
5: /home/az/.cache/chartify669318951/monitoring-monitoring-6ff4b74c7f/kube-prometheus-stack (88 bytes)
6: --version (9 bytes)
7: ~14.4.0 (7 bytes)
8: --create-namespace (18 bytes)
9: --namespace (11 bytes)
10: monitoring (10 bytes)
11: --values (8 bytes)
12: /home/az/.cache/helmfile343610699/monitoring-monitoring-values-7f9dd5fc5f (73 bytes)
13: --values (8 bytes)
14: /home/az/.cache/helmfile911285294/monitoring-monitoring-values-cf7564bcf (72 bytes)
15: --values (8 bytes)
16: /home/az/.cache/helmfile942005173/monitoring-monitoring-values-685fb7fbd (72 bytes)
17: --history-max (13 bytes)
18: 10 (2 bytes)
ERROR:
exit status 1
EXIT STATUS
1
STDERR:
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: CustomResourceDefinition "alertmanagerconfigs.monitoring.coreos.com" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "monitoring"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "monitoring"
COMBINED OUTPUT:
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: CustomResourceDefinition "alertmanagerconfigs.monitoring.coreos.com" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "monitoring"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "monitoring"
Also, before proceeding, helmfile shows a huge diff with the CRDs it plans to install anew, even though they were already installed as part of the same release.
Maybe it's a separate issue?
@azkore Ah, sorry, I misread your original comment. Your problem seems to be a different one. Would you mind submitting a separate issue with a small example helmfile.yaml for reproduction? Thanks!
@mumoshu sure, https://github.com/roboll/helmfile/issues/1778