After running helmfile sync the first time, the release was installed successfully.
However, a second run generates this error.
FAILED RELEASES:
NAME
frontend
in ./helmfile.yaml: failed processing release frontend: helm exited with status 1:
client.go:399: Replaced "frontend-nginx-ingress-controller" with kind Deployment for kind Deployment
client.go:399: Replaced "frontend-nginx-ingress-default-backend" with kind Deployment for kind Deployment
client.go:399: Replaced "frontend-nginx-ingress-controller" with kind Deployment for kind Deployment
client.go:399: Replaced "frontend-nginx-ingress-default-backend" with kind Deployment for kind Deployment
Error: UPGRADE FAILED: an error occurred while rolling back the release. original upgrade error: failed to replace object: Service "frontend-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "frontend-nginx-ingress-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable: failed to replace object: Service "frontend-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "frontend-nginx-ingress-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable
I suspect it might be related to the fact that I'm using helm v3. Could this be part of the incompatibility? My cluster has not run tiller ever since I migrated to helm v3.
EDIT: Using
Apologies for the unrelated issue.
This seems to be a problem with the chart or helm v3 itself. It happens regardless of using helmfile or not.
Closing.
For reference:
It is a problem with 90% of charts inherited from Helm 2.

@juliohm1978 are you *ing joking? Fix the Helm 2 to Helm 3 chart migration, then. Don't close your eyes to one of the most commented, top issues of Helm 3. Migration should be smooth. It's reasonable to display a deprecation message or something like that, not "error: fuck you all Helm 2 users!". Just silencing the issue won't help. We will open 10 new issues like that.
@juliohm1978 For reference #googlehelm3sucks
facing the same issue, with all charts migrated from helm 2 to helm 3
Probably setting force to false will help. It helped us, for instance.
Also, you might want to check out this thread: https://github.com/helm/helm/issues/6378
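For reference, the toggle suggested above lives per release in helmfile.yaml; the release and chart names below are illustrative, not taken from this thread:

```yaml
releases:
  - name: frontend                  # illustrative release name
    chart: stable/nginx-ingress     # illustrative chart
    # force: true makes helm3 take the "replace" path, which resubmits
    # the immutable empty clusterIP; force: false avoids that path
    force: false
```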
hey @mumoshu I have a chart that doesn't declare clusterIP either and I'm running into this with helm 3. I know this issue is closed, but it seems related to the use of force and the transition from helm2 to helm3, and to what force means in helm3 vs helm2? Any insights?
i.e. I get this, and this is with force: false too...
Upgrading release=myapp-stage-c1-8-4-3-1--2-1, chart=bitsofinfo-appdeploy/appdeploy
exec: helm3 upgrade --install --reset-values myapp-stage-c1-8-4-3-1--2-1 bitsofinfo-appdeploy/appdeploy --version 1.1.15 --wait --timeout 900s --force --namespace my-apps --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values376777632 --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values382279807 --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values559379922 --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values213438729 --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values549035732 --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values408969251 --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values333445414 --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values876916301 --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values563347272 --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values601963527 --history-max 10
exec: helm3 upgrade --install --reset-values myapp-stage-c1-8-4-3-1--2-1 bitsofinfo-appdeploy/appdeploy --version 1.1.15 --wait --timeout 900s --force --namespace my-apps --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values376777632 --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values382279807 --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values559379922 --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values213438729 --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values549035732 --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values408969251 --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values333445414 --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values876916301 --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values563347272 --values /var/folders/j1/_6q1h_w13mqgpxd5l2rcr0cm0000gs/T/values601963527 --history-max 10:
worker 1/1 finished
FAILED RELEASES:
NAME
myapp-stage-c1-8-4-3-1--2-1
err: release "myapp-stage-c1-8-4-3-1--2-1" in "deployments.helmfile.yaml" failed: failed processing release myapp-stage-c1-8-4-3-1--2-1: helm3 exited with status 1:
coalesce.go:165: warning: skipped value for env: Not a table.
coalesce.go:199: warning: destination for limits is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for requests is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for annotations is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for labels is a table. Ignoring non-table value <nil>
Error: UPGRADE FAILED: failed to replace object: Service "myapp-stage-c1-8-4-3-1--2-1" is invalid: spec.clusterIP: Invalid value: "": field is immutable
in ./deployments.helmfile.yaml: failed processing release myapp-stage-c1-8-4-3-1--2-1: helm3 exited with status 1:
coalesce.go:165: warning: skipped value for env: Not a table.
coalesce.go:199: warning: destination for limits is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for requests is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for annotations is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for labels is a table. Ignoring non-table value <nil>
Error: UPGRADE FAILED: failed to replace object: Service "myapp-stage-c1-8-4-3-1--2-1" is invalid: spec.clusterIP: Invalid value: "": field is immutable
Just got to know about this. Can we do anything other than fixing charts, as explained in https://github.com/helm/helm/issues/6378#issuecomment-557746499?
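The chart-side fix referenced there boils down to only rendering clusterIP in the Service template when a value is actually set, so a forced replace never submits the empty immutable field. A rough sketch (the values keys here are illustrative):

```yaml
# templates/service.yaml (fragment) -- illustrative chart-side guard
spec:
  type: {{ .Values.service.type }}
  {{- if .Values.service.clusterIP }}
  clusterIP: {{ .Values.service.clusterIP }}
  {{- end }}
```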
Well, that guy's comment is a bit of a specific example. I'm not even defining an explicit spec.clusterIP anywhere in the current release, and I am still getting this.
It does seem to be better if I set force: false... but I think it's some semantic difference in the way helm v3 implements force vs how it behaved in v2....
see: https://github.com/helm/helm/issues/7082 and https://github.com/helm/helm/pull/7431
I'm not sure if helmfile can do anything, or maybe note it in the docs. I'm still trying to better understand it myself. It's just unfortunate, as right now it's a blocker.
@mumoshu https://github.com/helm/helm/issues/7082#issuecomment-590612617
@bitsofinfo Thanks a lot for sharing!
I think we want two things. First, we should fix the empty clusterIP issue as that makes no sense. This can be done either by fixing charts, or by enhancing helmfile to automatically remove clusterIP: "" lines from the generated manifests before installing.
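The manifest-scrubbing idea could be sketched as a pre-apply filter. Everything below (the file paths, the sed invocation, the sample Service) is illustrative, not an existing helmfile feature:

```shell
# Illustrative only: simulate a rendered manifest that carries the
# problematic empty clusterIP field.
cat > /tmp/rendered.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: frontend-nginx-ingress-controller
spec:
  clusterIP: ""
  type: ClusterIP
EOF

# Drop any line that pins clusterIP to an empty string before the
# manifest is handed to helm/kubectl, so a forced replace no longer
# submits the immutable field.
sed '/clusterIP: ""/d' /tmp/rendered.yaml > /tmp/rendered.scrubbed.yaml
cat /tmp/rendered.scrubbed.yaml
```

In a real pipeline the same filter would sit between rendering (e.g. helm template) and apply.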
For cases where we do want the "upgrade or replace" behaviour, as --force doesn't work the same as it did in Helm 2, we should wait for https://github.com/helm/helm/pull/7431 to land, and helmfile should be enhanced to use --recreate or --force --recreate for force: true.
Well, not sure. From what I can gather it's not always a chart issue; for example, I'm not generating a clusterIP in the release I'm dealing with. It also appears to vary and affect other k8s resources, per those helm issues. It's definitely not a helmfile bug, but yeah, I guess something might be done in helmfile to help mitigate it until helm figures out a final resolution.
@mumoshu another interesting note https://github.com/helm/helm/issues/7350
Can you re-open this issue, or should I create another issue in helmfile just to track this, in case people stumble on it or in case helmfile wants to add some workarounds?
Very late apologies, everyone. This clearly seems to also be related to helm3 for use cases other than the one I mentioned originally. I'm reopening for the sake of everyone else. But I'm not at all sure how to reproduce this right now. I have long since moved to helm3, by either fixing charts or just plain helm del && helm install all over.
Any reproducible examples are welcome.
I find this very easy to reproduce:
$ helm version
version.BuildInfo{Version:"v3.3.0", GitCommit:"8a4aeec08d67a7b84472007529e8097ec3742105", GitTreeState:"dirty", GoVersion:"go1.14.7"}
$ helm create test
Creating test
$ helm upgrade --install test ./test --wait --force --namespace testnamespace
Release "test" does not exist. Installing it now.
NAME: test
LAST DEPLOYED: Fri Aug 28 18:44:27 2020
NAMESPACE: testnamespace
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace testnamespace -l "app.kubernetes.io/name=test,app.kubernetes.io/instance=test" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace testnamespace port-forward $POD_NAME 8080:80
$ helm upgrade --install test ./test --wait --force --namespace testnamespace
Error: UPGRADE FAILED: failed to replace object: Service "test" is invalid: spec.clusterIP: Invalid value: "": field is immutable