helm upgrade fails with spec.clusterIP: Invalid value: "": field is immutable

Created on 20 Apr 2020  ·  64 Comments  ·  Source: helm/helm

When issuing helm upgrade, it shows an error like the one below (here "my-service" is changed from "clusterIP: None" to "type: LoadBalancer", with no clusterIP field):

Error: UPGRADE FAILED: Service "my-service" is invalid: spec.clusterIP: Invalid value: "": field is immutable 

However, all the other pods are still restarted with the new version; only the "my-service" type is not changed to the new type "LoadBalancer".

I understand why the upgrade failed: Helm does not support changes to certain fields. But why does Helm still upgrade the other services/pods by restarting them? Shouldn't Helm do nothing if there is any error during the upgrade? I expected Helm to treat the whole set of services as a package and either upgrade all of them or none, but it seems my expectation might be wrong.

And if we ever end up in such a situation, how do we get out of it? For example, how can "my-service" be upgraded to the new type?

And if I use the --dry-run option, helm does not show any errors.

Is this considered a bug or expected behaviour, i.e. the upgrade throws an error but some services still get upgraded?
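For context on the all-or-nothing expectation: Helm applies resources sequentially and, by default, leaves already-applied resources in place when a later one fails. Helm 3 (and late Helm 2 releases) has an --atomic flag that asks Helm to roll the release back if the upgrade fails. A sketch with placeholder release/chart names:

```shell
# --atomic implies --wait and rolls the release back to the previous
# revision if any resource fails to apply (placeholder names).
helm upgrade --install --atomic my-release ./my-chart
```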

Output of helm version:

Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}

Output of kubectl version:

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.10-gke.27", GitCommit:"145f9e21a4515947d6fb10819e5a336aff1b6959", GitTreeState:"clean", BuildDate:"2020-02-21T18:01:40Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.):
GKE and Minikube

bug

Most helpful comment

FYI, the issue raised by the OP and the comments raised here about --force are separate, discrete issues. Let's try to focus on OP's issue here.

To clarify, the issue OP is describing is a potential regression @n1koo identified in https://github.com/helm/helm/issues/7956#issuecomment-620749552. That seems like a legitimate bug.

The other comments mentioning that removal of --force works for them describe intentional and expected behaviour from Kubernetes' point of view. With --force, you are asking Helm to make a PUT request against Kubernetes. Effectively, you are asking Kubernetes to take your target manifests (the templates rendered from your chart by helm upgrade) as the source of truth and overwrite the resources in your cluster with the rendered manifests. This is effectively a kubectl replace.

In most cases, your templates don't specify a cluster IP, which means that helm upgrade --force is asking to remove (or change) the service's cluster IP. This is an illegal operation from Kubernetes' point of view.

This is also documented in #7082.

This is also why removing --force works: Helm makes a PATCH operation, diffing against the live state, merging in the cluster IP into the patched manifest, preserving the cluster IP over the upgrade.
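As a rough sketch of why the PATCH path preserves the IP while the PUT path does not (a simplified illustration with plain dicts; Helm's real implementation uses Kubernetes strategic-merge patches, not this code):

```python
# Simplified model of Helm's three-way merge vs. a --force PUT.
# These dicts stand in for Service manifests; this is not Helm's actual code.

def three_way_patch(old: dict, new: dict, live: dict) -> dict:
    """Apply only the fields the chart actually changed onto the live object."""
    patch = {k: v for k, v in new.items() if old.get(k) != v}
    return {**live, **patch}

old_manifest = {"type": "ClusterIP", "port": 9090}   # previous release's rendered manifest
new_manifest = {"type": "ClusterIP", "port": 9090}   # upgraded chart, still no clusterIP
live_object  = {"type": "ClusterIP", "port": 9090,
                "clusterIP": "172.20.147.13"}        # what Kubernetes actually stores

# PATCH path (no --force): the chart changed nothing, so the assigned
# clusterIP from the live object survives the upgrade.
patched = three_way_patch(old_manifest, new_manifest, live_object)
assert patched["clusterIP"] == "172.20.147.13"

# PUT path (--force): the rendered manifest replaces the live object
# wholesale, dropping clusterIP -- an immutable-field change Kubernetes rejects.
replaced = dict(new_manifest)
assert "clusterIP" not in replaced
```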

If you want to forcefully remove and re-create the object like what was done in Helm 2, have a look at #7431.

Hope this clarifies things.

Moving forward, let's try to focus on OP's issue here.

All 64 comments

Not enough information has been provided to reproduce. Please tell us how to create a reproducible chart, and which Helm commands you used.

Hi, here are the steps to reproduce.
Start with the two YAML files below.

nginx.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

prometheus.yaml

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: prometheus
spec:
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: prom/prometheus
        name: prometheus
        ports:
        - containerPort: 9090
        imagePullPolicy: Always
      hostname: prometheus
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  selector:
    app: prometheus
  clusterIP: None
  ports:
  - name: headless
    port: 9090
    targetPort: 0

Then put these two files in helm1/templates/ and install. It shows the prometheus service uses a cluster IP and the nginx version is 1.14.2.

# helm upgrade --install test helm1
Release "test" does not exist. Installing it now.
NAME: test
LAST DEPLOYED: Tue Apr 21 20:42:55 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP    35d
prometheus   ClusterIP   None         <none>        9090/TCP   7s

# kubectl describe deployment nginx |grep Image
    Image:        nginx:1.14.2

Now update the image in nginx.yaml to the new version 1.16:

        image: nginx:1.16

and update the Service in prometheus.yaml by changing it to LoadBalancer:

spec:
  selector:
    app: prometheus
  ports:
  - name: "9090"
    port: 9090
    protocol: TCP
    targetPort: 9090
  type: LoadBalancer

Now put them in helm2 and run the upgrade. You can see the upgrade throws an error, but the nginx change still goes through (it is upgraded to the new version), while prometheus is not upgraded: it is still using a cluster IP.

# helm upgrade --install test helm2
Error: UPGRADE FAILED: cannot patch "prometheus" with kind Service: Service "prometheus" is invalid: spec.clusterIP: Invalid value: "": field is immutable

# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP    35d
prometheus   ClusterIP   None         <none>        9090/TCP   5m34s

# kubectl describe deployment nginx |grep Image
    Image:        nginx:1.16

helm list shows

# helm list
NAME    NAMESPACE   REVISION    UPDATED                                 STATUS  CHART                                       APP VERSION
test    default     2           2020-04-21 20:48:20.133644429 -0700 PDT failed  

helm history

# helm history test
REVISION    UPDATED                     STATUS      CHART       APP VERSION DESCRIPTION                                                                                                                                               
1           Tue Apr 21 20:42:55 2020    deployed    helm-helm   1.0.0.6     Install complete                                                                                                                                          
2           Tue Apr 21 20:48:20 2020    failed      helm-helm   1.0.0.6     Upgrade "test" failed: cannot patch "prometheus" with kind Service: Service "prometheus" is invalid: spec.clusterIP: Invalid value: "": field is immutable

We have the same behavior with v3.2.0; downgrading to v3.1.3 is our temporary fix.

I've hit a lot of this with my Helm 2 -> 3 migration. When trying to upgrade the converted releases for the first time, I get a lot of these errors, for the Nginx Ingress, Prometheus Operator, Graylog and Jaeger charts so far. For most of them I'm content with just deleting the services and letting Helm recreate them, but for Nginx Ingress this isn't an option...

Just found this https://github.com/helm/helm/issues/6378#issuecomment-557746499 which explains the problem in my case.

Closing as a duplicate of #6378. @cablespaghetti found the deeper explanation for this behaviour, which is described there in great detail.

Let us know if that does not work for you.

@GaramNick why would downgrading fix this for you? Can you elaborate more on “what” was fixed by downgrading?

@bacongobbler While you're here: is there any way to fix this situation without deleting the release and re-deploying? I can't see a way to do that under Helm 2 or 3. I want to hack the existing release data so Helm thinks the clusterIP has always been omitted and so no patch is necessary.

Have you tried kubectl edit?
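For anyone who does want to inspect or hand-edit the stored release record: Helm 3 keeps each revision in a Secret named sh.helm.release.v1.&lt;name&gt;.v&lt;revision&gt;, whose release field holds gzip-compressed JSON under an extra base64 layer. A hedged Python sketch of the decoding, round-tripped with fake data since no cluster is involved here:

```python
import base64
import gzip
import json

def decode_release(release_field: bytes) -> dict:
    """Decode the 'release' data field of a Helm 3 release Secret.

    After the Secret's own base64 layer is stripped (e.g. by kubectl),
    the value is still base64-encoded, gzip-compressed JSON.
    """
    return json.loads(gzip.decompress(base64.b64decode(release_field)))

# Round-trip demo with a fake release record instead of a real Secret:
record = {"name": "test", "version": 2, "manifest": "kind: Service"}
encoded = base64.b64encode(gzip.compress(json.dumps(record).encode()))
assert decode_release(encoded)["manifest"] == "kind: Service"
```

On a real cluster this would pair with something like `kubectl get secret sh.helm.release.v1.test.v2 -o jsonpath='{.data.release}' | base64 -d`; edit the stored record at your own risk.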

We have the same issue, and downgrading to 3.1.3 also fixed it for us. My guess is that it has to do with the new logic in https://github.com/helm/helm/pull/7649/commits/d829343c1514db17bee7a90624d06cdfbffde963 considering this a create and not an update, thus trying to set an empty IP instead of reusing the populated one.

Interesting find. Thank you for investigating.

@jlegrone any chance you might have time to look into this?

@bacongobbler Our CI/CD pipeline uses Helm to update our application that includes a Service with type ClusterIP. The command:

helm upgrade --install --force \
        --wait \
        --set image.repository="$CI_REGISTRY_IMAGE" \
        --set image.tag="$CI_COMMIT_REF_NAME-$CI_COMMIT_SHA" \
        --set image.pullPolicy=IfNotPresent \
        --namespace="$KUBE_NAMESPACE" \
        "$APP_NAME" \
        ./path/to/charts/

On v3.2.0 this command fails with Service "service-name" is invalid: spec.clusterIP: Invalid value: "": field is immutable

On v3.1.3 this works fine.

Let me know if you like to have more info.

Same here. We had the following service.yaml working fine with Helm 2 for many, many months.
After migration, the Helm 3.2 helm upgrade command failed with the same error as above. Downgrading to 3.1.3 resolved it.

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.global.name }}
  namespace: {{ index .Values.global.namespace .Values.global.env }}
  labels:
     microservice: {{ .Values.global.name }}
spec:
   type: ClusterIP
   ports:
   - port: 8080
   selector:
      microservice: {{ .Values.global.name }}

We have the same issue and downgrading to 3.1.3 fixed it also for us. My guess is that it has to do with the new logic in d829343 considering this a Create and not an update thus trying to set empty IP and not reusing the populated one

@n1koo Can you explain why you think this is the code causing the issue? This is the install code, not the upgrade code, and the code in 3.1 is also a `create`, yet it works there.

I am reviewing the issue with @adamreese , and we _think_ it is the patch that @n1koo identified. The Create method will bypass the normal 3-way diff on the Service, which will result in the service's clusterIP being set to "" instead of the value populated by Kubernetes. As a result, the manifest sent to the API server _appears_ to be resetting the cluster IP, which is illegal on a service (and definitely not what the user intended).

We're still looking into this and I will update if we learn more.

So https://github.com/helm/helm/issues/6378#issuecomment-557746499 is correct. Please read that before continuing with this issue. If clusterIP: "" is set, Kubernetes will assign an IP. On the next helm upgrade, if clusterIP: "" is set again, it will give the error above, because it appears _to Kubernetes_ that you are trying to reset the IP. (Yes, Kubernetes modifies the spec: section of a service!)

When the Create method bypasses the 3-way diff, it sets clusterIP: "" instead of setting it to the IP address assigned by Kubernetes.

To reproduce:

$ helm create issue7956
$ # edit issue7956/templates/service.yaml and add `clusterIP: ""` under `spec:`
$ helm upgrade --install issue7956 issue7956
...
$ helm upgrade issue7956 issue7956
Error: UPGRADE FAILED: cannot patch "issue-issue7956" with kind Service: Service "issue-issue7956" is invalid: spec.clusterIP: Invalid value: "": field is immutable

The second time you run the upgrade, it will fail.
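In chart terms, the difference between the failing and safe spellings looks like this (illustrative fragments, not from any particular chart):

```yaml
# Fails on the second upgrade: Kubernetes assigns an IP after the first
# apply, and the next patch appears to reset it to "".
spec:
  type: ClusterIP
  clusterIP: ""
---
# Safe: omit clusterIP entirely and let Kubernetes own the field.
spec:
  type: ClusterIP
```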

I cannot reproduce @IdanAdar 's case on master.

@GaramNick there is not enough info about the service you are using for us to reproduce your error.

My situation:
version.BuildInfo{Version:"v3.2.0", GitCommit:"e11b7ce3b12db2941e90399e874513fbd24bcb71", GitTreeState:"clean", GoVersion:"go1.13.10"}
also tested w/
version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}

given the following service template:

apiVersion: v1
kind: Service
metadata:
  name: {{ include "app.fullname" . }}
  labels:
    {{- include "app.labels" . | nindent 4 }}
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: {{ include "app.fullname" . }}_mapping
      prefix: /{{ include "app.fullname" . }}
      host: "^{{ include "app.fullname" . }}.*"
      host_regex: true
      service: {{ include "app.fullname" . }}.{{ .Release.Namespace }}
      rewrite: ""
      timeout_ms: 60000
      bypass_auth: true
      cors:
        origins: "*"
        methods: POST, GET, OPTIONS
        headers:
        - Content-Type
        - Authorization
        - x-client-id
        - x-client-secret
        - x-client-trace-id
        - x-flow-proto
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: {{ include "app.fullname" . }}_swagger_mapping
      ambassador_id: corp
      prefix: /swagger
      host: "^{{ include "app.fullname" . }}.corp.*"
      host_regex: true
      service: {{ include "app.fullname" . }}.{{ .Release.Namespace }}
      rewrite: ""
      bypass_auth: true
      cors:
        origins: "*"
        methods: POST, GET, OPTIONS
        headers:
        - Content-Type
        - x-client-id
        - x-client-secret
        - Authorization
        - x-flow-proto
  namespace: {{ .Release.Namespace }}
spec:
  type: {{ .Values.service.type }}
  selector:
    {{- include "app.selectorLabels" . | nindent 4 }}
  ports:
  - port: {{ .Values.service.port }}
    name: http-rest-hub
    targetPort: http-rest
  - port: {{ .Values.service.healthPort }}
    name: http-health
    targetPort : http-health

which results in the following after upgrade --install:

apiVersion: v1
kind: Service
metadata:
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: hub-alt-bor_mapping
      prefix: /hub-alt-bor
      host: "^hub-alt-bor.*"
      host_regex: true
      service: hub-alt-bor.brett
      rewrite: ""
      timeout_ms: 60000
      bypass_auth: true
      cors:
        origins: "*"
        methods: POST, GET, OPTIONS
        headers:
        - Content-Type
        - Authorization
        - x-client-id
        - x-client-secret
        - x-client-trace-id
        - x-flow-proto
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: hub-alt-bor_swagger_mapping
      ambassador_id: corp
      prefix: /swagger
      host: "^hub-alt-bor.corp.*"
      host_regex: true
      service: hub-alt-bor.brett
      rewrite: ""
      bypass_auth: true
      cors:
        origins: "*"
        methods: POST, GET, OPTIONS
        headers:
        - Content-Type
        - x-client-id
        - x-client-secret
        - Authorization
        - x-flow-proto
    meta.helm.sh/release-name: alt-bor
    meta.helm.sh/release-namespace: brett
  creationTimestamp: ...
  labels:
    app: hub
    app.kubernetes.io/instance: alt-bor
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: hub
    app.kubernetes.io/version: v1.6.0-rc.26
    deploy.xevo.com/stackname: bor-v0.1-test
    helm.sh/chart: hub-0.0.4
    owner: gateway
    ownerSlack: TODOunknown
  name: hub-alt-bor
  namespace: brett
  resourceVersion: ...
  selfLink: ...
  uid: ...
spec:
  clusterIP: 172.20.147.13
  ports:
  - name: http-rest-hub
    port: 80
    protocol: TCP
    targetPort: http-rest
  - name: http-health
    port: 90
    protocol: TCP
    targetPort: http-health
  selector:
    app.kubernetes.io/instance: alt-bor
    app.kubernetes.io/name: hub
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

If I then upload this exact same chart as version 0.0.5 and run upgrade --install again, I get the following:
Error: UPGRADE FAILED: failed to replace object: Service "hub-alt-bor" is invalid: spec.clusterIP: Invalid value: "": field is immutable

The only difference is the value of the helm.sh/chart label, which is now hub-0.0.5.

This is a huge blocker.

@GaramNick there is not enough info about the service you are using for us to reproduce your error.

@technosophos What do you need? Happy to provide more details!

Update! The upgrade fails ONLY when using helm upgrade --install with --force. Less of a blocker now.

Oh! That is interesting. That should make the error easier to track down.

Hello @technosophos @bacongobbler, we have the same 2 issues:

version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}

  1. Issue
    We have a Service template without clusterIP, but Kubernetes assigns a clusterIP automatically:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ .Values.image.name }}
    release: {{ .Release.Name }}
spec:
  type: ClusterIP
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.port }}
      protocol: TCP
      name: http
  selector:
    app: {{ .Values.image.name }}
    release: {{ .Release.Name }}

After migrating to Helm 3 with helm 2to3 convert and trying to upgrade the same release with helm3 upgrade --install --force:

failed to replace object: Service "dummy-stage" is invalid: spec.clusterIP: Invalid value: "": field is immutable

If I do the same without --force, helm3 upgrade --install works fine without error.

  2. Issue
    If I want to change spec.selector.matchLabels in a Deployment, which is an immutable field, without --force I get this error:
cannot patch "dummy-stage" with kind Deployment: Deployment.apps "dummy-stage" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/name":"web-nerf-dummy-app"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

If I do the same with --force, I get this error:

failed to replace object: Deployment.apps "dummy-stage" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/name":"web-nerf-dummy-app"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

Is it possible to implement the same behaviour for --force as in Helm 2? With Helm 2 we could upgrade an immutable field without any error.

apiVersion: v1
kind: Service
metadata:
  name: zipkin-proxy
  namespace: monitoring
spec:
  ports:
  - port: 9411
    targetPort: 9411
  selector:
    app: zipkin-proxy
  type: ClusterIP

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zipkin-proxy
  namespace: monitoring
spec:
  replicas: {{ .Values.zipkinProxy.replicaCount }}
  template:
    metadata:
      labels:
        app: zipkin-proxy
      annotations:
        prometheus.io/scrape: 'true'
    spec:
      containers:
      - image: {{ .Values.image.repository }}/zipkin-proxy
        name: zipkin-proxy
        env:
        - name: STORAGE_TYPE
          value: stackdriver

helm upgrade -i --debug --force --namespace monitoring zipkin-proxy --values ./values.yaml.tmp .

I have tried removing the --force option, and I tried with v3.1.3, v3.2.0 as well as v3.2.1; still the same issue.

Stack trace

history.go:52: [debug] getting history for release zipkin-proxy
upgrade.go:84: [debug] preparing upgrade for zipkin-proxy
upgrade.go:92: [debug] performing update for zipkin-proxy
upgrade.go:234: [debug] creating upgraded release for zipkin-proxy
client.go:163: [debug] checking 2 resources for changes
client.go:195: [debug] error updating the resource "zipkin-proxy":
         cannot patch "zipkin-proxy" with kind Service: Service "zipkin-proxy" is invalid: spec.clusterIP: Invalid value: "": field is immutable
client.go:403: [debug] Looks like there are no changes for Deployment "zipkin-proxy"
upgrade.go:293: [debug] warning: Upgrade "zipkin-proxy" failed: cannot patch "zipkin-proxy" with kind Service: Service "zipkin-proxy" is invalid: spec.clusterIP: Invalid value: "": field is immutable
Error: UPGRADE FAILED: cannot patch "zipkin-proxy" with kind Service: Service "zipkin-proxy" is invalid: spec.clusterIP: Invalid value: "": field is immutable
helm.go:75: [debug] cannot patch "zipkin-proxy" with kind Service: Service "zipkin-proxy" is invalid: spec.clusterIP: Invalid value: "": field is immutable
helm.sh/helm/v3/pkg/kube.(*Client).Update
        /home/circleci/helm.sh/helm/pkg/kube/client.go:208
helm.sh/helm/v3/pkg/action.(*Upgrade).performUpgrade
        /home/circleci/helm.sh/helm/pkg/action/upgrade.go:248
helm.sh/helm/v3/pkg/action.(*Upgrade).Run
        /home/circleci/helm.sh/helm/pkg/action/upgrade.go:93
main.newUpgradeCmd.func1
        /home/circleci/helm.sh/helm/cmd/helm/upgrade.go:137
github.com/spf13/cobra.(*Command).execute
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:826
github.com/spf13/cobra.(*Command).ExecuteC
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:914
github.com/spf13/cobra.(*Command).Execute
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:864
main.main
        /home/circleci/helm.sh/helm/cmd/helm/helm.go:74
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
UPGRADE FAILED
main.newUpgradeCmd.func1
        /home/circleci/helm.sh/helm/cmd/helm/upgrade.go:139
github.com/spf13/cobra.(*Command).execute
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:826
github.com/spf13/cobra.(*Command).ExecuteC
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:914
github.com/spf13/cobra.(*Command).Execute
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:864
main.main
        /home/circleci/helm.sh/helm/cmd/helm/helm.go:74
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357

I am having this issue when the Helm Chart version changes and having an existing deployment.

Using Helm v3.2.0

I can confirm that downgrading to 3.1.2 works.

@gor181 How can we reproduce that? What broke on 3.2 but worked on 3.1? The chart (or at least svc template) and commands are what we need to be able to figure out what changed.

@azarudeena @alexandrsemak -- for both of you, the --force flag is what is causing this. If you remove --force, does the upgrade work?

@technosophos I did try without --force; it didn't work. Planning to try with 3.1.2.

@azarudeena can you please provide a set of instructions to reproduce your issue? You showed some output of a service and a deployment template, but then you also referenced a values.yaml.tmp which we don't know the output of, nor the Chart.yaml.

Can you please provide a sample chart we can use to reproduce your issue?

@bacongobbler I am sharing the structure.

Chart.yaml

apiVersion: v1
description: Deploys Stackdriver Trace Zipkin Proxy
name: zipkin-proxy
version: 1.0.0

I have put my template yaml above,

My values.yaml.tmp is as below:

zipkinProxy:
  replicaCount: 1

image:
  repository: openzipkin/zipkin

I package it with helm package --version and use the same version with the upgrade. Let me know if this works. Will update here once I try with 3.1.2.

Edit

I tried downgrading to 3.1.2 and 3.1.1. Still not able to get this patched.

I faced the same issue, but when upgrading a Helm chart via the Terraform Helm provider.
After I changed force_update = true to force_update = false, the error went away.

I am having this issue when the Helm Chart version changes and having an existing deployment.

Using Helm v3.2.0

Disabling --force flag made it work.

@technosophos removing --force resolves the ClusterIP issue when you migrate to Helm 3, as Helm 2 does not try to upgrade the ClusterIP while Helm 3 does.
But Helm 3 is not able to resolve the issue with immutable fields such as matchLabels.

Kubernetes modifies the spec: section of a service

Should this be considered, at root, a design flaw in Kubernetes? https://kubernetes.io/docs/concepts/services-networking/service/#choosing-your-own-ip-address makes no mention of this behavior. I would have expected an assigned value to be placed in the status section.

(A similar behavior exists for the .spec.nodeName of a Pod, but that is unlikely to affect Helm users.)

v3.2.3: it fails with --force and passes without --force. There is no clusterIP: in the chart template, which I guess https://github.com/helm/helm/pull/8000/files was supposed to fix.

upgrade.go:121: [debug] preparing upgrade for eos-eve-srv-d1
upgrade.go:129: [debug] performing update for eos-eve-srv-d1
upgrade.go:308: [debug] creating upgraded release for eos-eve-srv-d1
client.go:173: [debug] checking 6 resources for changes
client.go:432: [debug] Replaced "eos-eve-srv-d1-fsnode" with kind ServiceAccount for kind ServiceAccount
client.go:432: [debug] Replaced "eos-eve-srv-d1-fsnode-imagepullsecret" with kind Secret for kind Secret
client.go:432: [debug] Replaced "eos-eve-srv-d1-fsnode-config" with kind ConfigMap for kind ConfigMap
client.go:205: [debug] error updating the resource "eos-eve-srv-d1-fsnode":
         failed to replace object: Service "eos-eve-srv-d1-fsnode" is invalid: spec.clusterIP: Invalid value: "": field is immutable
client.go:432: [debug] Replaced "eos-eve-srv-d1-fsnode" with kind Deployment for kind Deployment
client.go:432: [debug] Replaced "eos-eve-srv-d1-fsnode" with kind Ingress for kind Ingress
upgrade.go:367: [debug] warning: Upgrade "eos-eve-srv-d1" failed: failed to replace object: Service "eos-eve-srv-d1-fsnode" is invalid: spec.clusterIP: Invalid value: "": field is immutable
Error: UPGRADE FAILED: failed to replace object: Service "eos-eve-srv-d1-fsnode" is invalid: spec.clusterIP: Invalid value: "": field is immutable
helm.go:84: [debug] failed to replace object: Service "eos-eve-srv-d1-fsnode" is invalid: spec.clusterIP: Invalid value: "": field is immutable
helm.sh/helm/v3/pkg/kube.(*Client).Update
        /private/tmp/helm-20200608-50972-gq0j1j/src/helm.sh/helm/pkg/kube/client.go:218
helm.sh/helm/v3/pkg/action.(*Upgrade).performUpgrade
        /private/tmp/helm-20200608-50972-gq0j1j/src/helm.sh/helm/pkg/action/upgrade.go:322
helm.sh/helm/v3/pkg/action.(*Upgrade).Run
        /private/tmp/helm-20200608-50972-gq0j1j/src/helm.sh/helm/pkg/action/upgrade.go:130
main.newUpgradeCmd.func1
        /private/tmp/helm-20200608-50972-gq0j1j/src/helm.sh/helm/cmd/helm/upgrade.go:144
github.com/spf13/cobra.(*Command).execute
        /private/tmp/helm-20200608-50972-gq0j1j/pkg/mod/github.com/spf13/[email protected]/command.go:842
github.com/spf13/cobra.(*Command).ExecuteC
        /private/tmp/helm-20200608-50972-gq0j1j/pkg/mod/github.com/spf13/[email protected]/command.go:950
github.com/spf13/cobra.(*Command).Execute
        /private/tmp/helm-20200608-50972-gq0j1j/pkg/mod/github.com/spf13/[email protected]/command.go:887
main.main
        /private/tmp/helm-20200608-50972-gq0j1j/src/helm.sh/helm/cmd/helm/helm.go:83
runtime.main
        /usr/local/Cellar/[email protected]/1.13.12/libexec/src/runtime/proc.go:203
runtime.goexit
        /usr/local/Cellar/[email protected]/1.13.12/libexec/src/runtime/asm_amd64.s:1357
UPGRADE FAILED
main.newUpgradeCmd.func1
        /private/tmp/helm-20200608-50972-gq0j1j/src/helm.sh/helm/cmd/helm/upgrade.go:146
github.com/spf13/cobra.(*Command).execute
        /private/tmp/helm-20200608-50972-gq0j1j/pkg/mod/github.com/spf13/[email protected]/command.go:842
github.com/spf13/cobra.(*Command).ExecuteC
        /private/tmp/helm-20200608-50972-gq0j1j/pkg/mod/github.com/spf13/[email protected]/command.go:950
github.com/spf13/cobra.(*Command).Execute
        /private/tmp/helm-20200608-50972-gq0j1j/pkg/mod/github.com/spf13/[email protected]/command.go:887
main.main
        /private/tmp/helm-20200608-50972-gq0j1j/src/helm.sh/helm/cmd/helm/helm.go:83
runtime.main
        /usr/local/Cellar/[email protected]/1.13.12/libexec/src/runtime/proc.go:203
runtime.goexit
        /usr/local/Cellar/[email protected]/1.13.12/libexec/src/runtime/asm_amd64.s:1357

I am observing this issue on 3.2.3 but not on 3.2.0. Disabling force also was a usable workaround.


Are there any known workarounds? I faced the same issue when trying to upgrade https://github.com/helm/charts/tree/master/stable/rabbitmq-ha from 1.34.1 to 1.46.4. Neither --force nor downgrading helm to 3.1.3 helped; deleting the service in question and running helm upgrade did help.

@EvgeniGordeev This is going to be a crude solution, but it worked for me with a small downtime: uninstall and reinstall the chart.
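For reference, the delete-and-recreate workaround mentioned above amounts to the following, using OP's names (the Service is briefly unavailable between the two commands):

```shell
# Delete only the Service whose immutable field must change, then let the
# next upgrade recreate it from the chart.
kubectl delete service prometheus
helm upgrade --install test helm2
```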

We've hit this as well with the nginxinc ingress chart. We use --force generally.

Since this issue is still open: are there plans for some sort of fix, or is this deemed working as designed? (It is hard to discern from this issue plus the others opened with the same behavior.) I read one explanation that this is an issue with the chart itself, and that clusterIP: "" should not be used; instead the value should be completely omitted.

Is this something we should be chasing up with the chart developers?

@cdunford - the suggested fix is to stop using --force, as was suggested in https://github.com/helm/helm/issues/7956#issuecomment-643432099.

This PR could also address the issue: #7431 (also suggested in that comment)...

We hit this issue for the Nth time; we are using the --force flag in our pipeline as well.

The original problem came along with Helm 2, so will it also be fixed in Helm 2? @bacongobbler

@bacongobbler why do you say providing --force is a different issue from this one, if stripping it OR downgrading helps?

I mean, I've just hit the issue with Helm 3.3.4, https://artifacthub.io/packages/helm/bitnami/nginx chart and no values changed. Tested on three different clouds: GCP/GKE, Azure/AKS and AWS/EKS, failed on all three.

Immediately worked after I have downgraded Helm to 3.1.3 AND also worked on 3.3.4 without "--force" flag.

I thought I made it fairly clear in my earlier comment that there are two separate, unique cases where a user can see this error. One is OP’s case. The other is from the use of --force. We are focusing on OP’s issue here.

Out of respect for the people who are experiencing the same issue as the OP, please stop hijacking this thread to talk about --force. We are trying to discuss how to resolve OP’s issue. If you want to talk about topics that are irrelevant to the issue that the OP described, please either open a new ticket or have a look at the suggestions I made earlier.

@tibetsam with regards to fixing this for Helm 2: no. We are no longer providing bug fixes for Helm 2. See https://helm.sh/blog/helm-v2-deprecation-timeline/ for more info.

I think I managed to reproduce OP's problem with the JupyterHub helm chart.
Hopefully, with the instructions below, you will manage to reproduce the issue:


**Important:** The JupyterHub helm chart does not contain a spec.clusterIP field in its Service specifications, as you can see (for example) here: https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/c0a43af12a89d54bcd6dcb927fdcc2f623a14aca/jupyterhub/templates/hub/service.yaml#L17-L29


I am using helm and kind to reproduce the problem:

```console
➜ helm version
version.BuildInfo{Version:"v3.4.0", GitCommit:"7090a89efc8a18f3d8178bf47d2462450349a004", GitTreeState:"clean", GoVersion:"go1.14.10"}

➜ kind version
kind v0.9.0 go1.15.2 linux/amd64

➜ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1", GitCommit:"206bcadf021e76c27513500ca24182692aabd17e", GitTreeState:"clean", BuildDate:"2020-09-14T07:30:52Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
```

How to reproduce

1. Create a new kind cluster:

   ```shell
   kind create cluster
   ```

2. Create a file called config.yaml with the following content (randomly generated hex):

   ```yaml
   proxy:
     secretToken: "3a4bbf7405dfe1096ea2eb9736c0df299299f94651fe0605cfb1c6c5700a6786"
   ```

   FYI I am following the instructions for helm file installation (link)

3. Add the helm repository:

   ```shell
   helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
   helm repo update
   ```

4. Install the chart (with the --force option):

   ```shell
   RELEASE=jhub
   NAMESPACE=jhub

   helm upgrade --cleanup-on-fail --force \
     --install $RELEASE jupyterhub/jupyterhub \
     --namespace $NAMESPACE \
     --create-namespace \
     --version=0.9.0 \
     --values config.yaml
   ```

5. Repeat the previous step.

Error:

```
Error: UPGRADE FAILED: failed to replace object: PersistentVolumeClaim "hub-db-dir" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
        AccessModes:      []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
        Selector:         nil,
        Resources:        core.ResourceRequirements{Requests: core.ResourceList{s"storage": {i: resource.int64Amount{value: 1073741824}, s: "1Gi", Format: "BinarySI"}}},
-       VolumeName:       "",
+       VolumeName:       "pvc-c614de5c-4749-4755-bd3a-6e603605c44e",
-       StorageClassName: nil,
+       StorageClassName: &"standard",
        VolumeMode:       &"Filesystem",
        DataSource:       nil,
  }
 && failed to replace object: Service "hub" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "proxy-api" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "proxy-public" is invalid: spec.clusterIP: Invalid value: "": field is immutable
```

I'm on Helm 3.3.4 and this is still an issue.

On Helm 2.14.1 the issue is present both with and without --force.

Workaround: switching the Service to spec.type: NodePort fixed my Helm chart upgrade.
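For anyone trying that workaround, a minimal sketch of the values override (the exact key depends on the chart, though `service.type` is a common convention):

```yaml
# Hypothetical values.yaml override -- check your chart's values for the real key.
service:
  type: NodePort
```

This works because changing an existing Service between ClusterIP and NodePort keeps the already-assigned clusterIP, so the immutable field is never touched.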

We have the same problem on v3.4.1 with --force flag.

@bacongobbler I know you have been trying vigilantly to keep the OP's problem (separate from #6378) from being hijacked. I thought it might help those posting to review their error message, to know whether this thread is for them or not:

Is your error message "Error: UPGRADE FAILED: failed to _replace_..." and you _did use_ --force in your command? GOTO #6378

Is your error message "Error: UPGRADE FAILED: cannot _patch_..." and you _did not use_ --force in your command? Please post in this issue how you reproduced it.

@zpittmansf

```console
helm3 upgrade concourse concourse/concourse -f temp.yaml --force
Error: UPGRADE FAILED: failed to replace object: Service "concourse-web" is invalid: spec.clusterIP: Invalid value: "None": field is immutable && failed to replace object: Service "concourse-web-prometheus" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "concourse-web-worker-gateway" is invalid: spec.clusterIP: Invalid value: "": field is immutable

helm3 upgrade concourse concourse/concourse -f temp.yaml
Error: UPGRADE FAILED: cannot patch "concourse-web" with kind Service: Service "concourse-web" is invalid: spec.clusterIP: Invalid value: "None": field is immutable
```

I'm having the same issue on Helm 3.4.2. I run a helm-chart that creates a deployment, serviceaccount, and service. I add a label to my existing set of labels on my chart on the deployment, and now it refuses to upgrade:

```console
helm upgrade test-whale charts/app-template/ --install --values values.yaml --namespace whale --force
Error: UPGRADE FAILED: failed to replace object: Service "test-whale" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Deployment.apps "test-whale-canary" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"test-whale", "app-id":"302040", "environment":"test", "version":"latest"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable && failed to replace object: Deployment.apps "test-whale" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"test-whale", "app-id":"302040", "environment":"test", "version":"latest"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
```

Basically, it seems like you can't ever add a label past the initial helm deploy.

It sounds terrible, but could Helm implement a list of "immutable fields" which would receive special treatment?

In this case, an "immutable field" would be the Service object's spec.clusterIP - Helm would consider it immutable and generate an API request which would a) not try to replace it, b) not try to remove it, and c) not try to update it.

In practice, Helm would look for the current value of an immutable field and include that value in the API request's payload. As a result, the k8s API server would see Helm's request as "ok, they are not trying to modify this field".

The current situation is that Helm is very unreliable, especially with Service resources, because Helm assumes it holds the truth of a given resource. This is a false assumption that leads to the problem in this issue, since a resource may have received new properties server-side which Helm is unaware of. Hence, Helm should know which fields need special treatment in order to be a conforming k8s citizen.

PS. Also, kubectl implements a lot of logic client-side, which allows it to cope with these implicit requirements.
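To make the proposal above concrete, here is a sketch (NOT Helm's actual code; the field table and function are hypothetical) of the merge step that would copy known-immutable fields from the live object into the desired manifest:

```python
import copy

# Hypothetical client-side table of known-immutable fields per kind.
IMMUTABLE_FIELDS = {
    "Service": [("spec", "clusterIP")],
    "PersistentVolumeClaim": [("spec", "volumeName"), ("spec", "storageClassName")],
}

def preserve_immutable(live: dict, desired: dict) -> dict:
    """Return a copy of `desired` with immutable fields taken from `live`,
    so the API server sees those fields as unchanged."""
    result = copy.deepcopy(desired)
    for path in IMMUTABLE_FIELDS.get(live.get("kind", ""), []):
        src, dst = live, result
        for key in path[:-1]:
            if key not in src:
                break
            src = src[key]
            dst = dst.setdefault(key, {})
        else:
            if path[-1] in src:  # live value wins for immutable fields
                dst[path[-1]] = src[path[-1]]
    return result

live = {"kind": "Service", "spec": {"type": "ClusterIP", "clusterIP": "10.96.0.7"}}
desired = {"kind": "Service", "spec": {"type": "LoadBalancer"}}  # no clusterIP set
print(preserve_immutable(live, desired)["spec"])
# {'type': 'LoadBalancer', 'clusterIP': '10.96.0.7'}
```

Helm's three-way merge only considers fields recorded in the release's last-applied state; a table like this would also cover fields the server populated on its own.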

@jbilliau-rcd

try not using --force

@pre

I think there is something wacky happening with the three-way merge. Perhaps the last-applied annotation is being improperly recorded somehow.

I ended up figuring it out; apparently you can change labels on a Deployment and on the pod spec, but NOT on the match selector... Kubernetes does not like that. Which is strange to me; how else am I supposed to modify my deployment to only select pods of version "v2" during, say, a canary deployment? Currently I have no way of doing that, so I'm confused on that part.
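For the canary case, the usual pattern is to leave spec.selector untouched and give the canary its own Deployment whose selector carries a dedicated label. A sketch, reusing the names from the earlier output (the `track` label and image are hypothetical):

```yaml
# Hypothetical canary Deployment: spec.selector is immutable after creation,
# so the canary gets its own Deployment instead of a mutated selector.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-whale-canary
spec:
  selector:
    matchLabels:
      app: test-whale
      track: canary          # fixed at creation, never changed
  template:
    metadata:
      labels:
        app: test-whale
        track: canary
        version: v2          # free to change; not part of the selector
    spec:
      containers:
        - name: app
          image: example/test-whale:v2   # hypothetical image
```

The Service can then select only `app: test-whale` to balance across stable and canary pods, while each Deployment manages its own pods through the `track` label.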

Upgrade to helm version 3.5.0 solved the issue.

> Upgrade to helm version 3.5.0 solved the issue.

how exactly?

> Upgrade to helm version 3.5.0 solved the issue.

Helm version 3.5.0 still does not work for me.
But without --force it worked.

Hit this in helm 3.5.2

I tried removing --force but I'm still getting the same issue:

```
Upgrade "gateway" failed: failed to replace object: Service "ingress"
    is invalid: spec.clusterIPs[0]: Invalid value: []string(nil): primary clusterIP
    can not be unset
```

So far I have found a reasonable workaround - the --reuse-values flag. It works for my case.

3.5.2 still has this issue, with or without --reuse-values.

3.5.3 has this as well :/
