I use the following to install / upgrade a chart:
./helm upgrade --install \
  --set rbac.create=false \
  --set controller.replicaCount=2 \
  --set controller.service.loadBalancerIP=$ip \
  --wait main-ingress stable/nginx-ingress
(Where $ip is an IP, e.g. 10.0.0.1)
That's done in a CI/CD pipeline, so the idea is to install the first time, upgrade next times.
It installs fine. At the second run, it outputs the following:
client.go:339: Cannot patch Service: "main-ingress-nginx-ingress-controller" (Service "main-ingress-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable)
client.go:358: Use --force to force recreation of the resource
client.go:339: Cannot patch Service: "main-ingress-nginx-ingress-default-backend" (Service "main-ingress-nginx-ingress-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable)
client.go:358: Use --force to force recreation of the resource
Error: UPGRADE FAILED: Service "main-ingress-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable && Service "main-ingress-nginx-ingress-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable
I also get this on helm list:
NAME NAMESPACE REVISION UPDATED STATUS CHART
main-ingress default 1 2019-09-06 13:17:33.8463781 -0400 EDT deployed nginx-ingress-1.18.0
main-ingress default 2 2019-09-06 13:21:11.6428945 -0400 EDT failed nginx-ingress-1.18.0
So, the release has failed.
I didn't have that problem with Helm 2. Is it due to a change of behaviour in Helm 3, or is it a bug? If it's the former, how should I change the command to avoid the problem?
Output of helm version: version.BuildInfo{Version:"v3.0.0-beta.2", GitCommit:"26c7338408f8db593f93cd7c963ad56f67f662d4", GitTreeState:"clean", GoVersion:"go1.12.9"}
Output of kubectl version: Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.10", GitCommit:"37d169313237cb4ceb2cc4bef300f2ae3053c1a2", GitTreeState:"clean", BuildDate:"2019-08-19T10:44:49Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}
Cloud Provider/Platform (AKS, GKE, Minikube etc.): AKS
This is likely related to a recent change in Helm 3 where it now uses a three-way merge patch strategy similar to kubectl. See #6124
If you can provide steps on how to reproduce this, that would be wonderful. Thanks!
Sure!
I created an AKS cluster.
I created a public IP in the MC_* resource group.
I stored the IP address of that public IP in $ip.
Then I basically ran this command twice:
./helm upgrade --install \
  --set rbac.create=false \
  --set controller.replicaCount=2 \
  --set controller.service.loadBalancerIP=$ip \
  --wait main-ingress stable/nginx-ingress
This is similar to what is done in https://docs.microsoft.com/en-us/azure/aks/ingress-static-ip.
The difference is that I run helm upgrade --install twice. The purpose is to have a single, unconditional command line in my CI/CD.
Let me know if you need more detail to reproduce.
Was that enough to reproduce? I can provide a bash script if it helps.
Sorry, off at Helm Summit EU for the week so I haven't had time to respond yet.
Ah... no worries. Enjoy the summit!
I'm also experiencing this issue
$ helm version --short
v3.0.0-beta.3+g5cb923e
nginx-ingress chart installs fine on the first run, however on an upgrade...
$ helm upgrade --install first-chart stable/nginx-ingress --namespace infra
client.go:357: Cannot patch Service: "first-chart-nginx-ingress-controller" (Service "first-chart-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable)
client.go:376: Use --force to force recreation of the resource
client.go:357: Cannot patch Service: "first-chart-nginx-ingress-default-backend" (Service "first-chart-nginx-ingress-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable)
client.go:376: Use --force to force recreation of the resource
Error: UPGRADE FAILED: Service "first-chart-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable && Service "first-chart-nginx-ingress-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable
$ helm ls -n infra
NAME NAMESPACE REVISION UPDATED STATUS CHART
first-chart infra 1 2019-09-17 16:15:25.513997106 -0500 CDT deployed nginx-ingress-1.20.0
first-chart infra 2 2019-09-17 16:15:30.845249671 -0500 CDT failed nginx-ingress-1.20.0
I believe this is an issue with the nginx-ingress chart, not Helm 3. By default, the chart will always try to pass controller.service.clusterIP = "" and defaultBackend.service.clusterIP = "" unless you set controller.service.omitClusterIP=true and defaultBackend.service.omitClusterIP=true.
link to sources:
https://github.com/helm/charts/blob/master/stable/nginx-ingress/values.yaml#L321
https://github.com/helm/charts/blob/master/stable/nginx-ingress/templates/controller-service.yaml#L22
workaround:
$ helm upgrade --install ingress-test stable/nginx-ingress --set controller.service.omitClusterIP=true --set defaultBackend.service.omitClusterIP=true
I've tried that, but I'm still getting the same error:
helm upgrade --install ingx stable/nginx-ingress -f ingx-values.yaml
client.go:357: Cannot patch Service: "ingx-nginx-ingress-controller" (Service "ingx-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable)
client.go:376: Use --force to force recreation of the resource
client.go:357: Cannot patch Service: "ingx-nginx-ingress-default-backend" (Service "ingx-nginx-ingress-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable)
client.go:376: Use --force to force recreation of the resource
Error: UPGRADE FAILED: Service "ingx-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable && Service "ingx-nginx-ingress-default-backend" is invalid: spec.clusterIP: Invalid value: "": field is immutable
ingx-values.yaml
rbac:
  create: true
controller:
  service:
    externalTrafficPolicy: Local
    omitClusterIP: true
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 100
    targetCPUUtilizationPercentage: "70"
    targetMemoryUtilizationPercentage: "70"
defaultBackend:
  service:
    omitClusterIP: true
As you can see below, the rendered template doesn't have clusterIP in it:
helm template ingx stable/nginx-ingress -f ingx-values.yaml
---
# Source: nginx-ingress/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: nginx-ingress
chart: nginx-ingress-1.20.0
heritage: Helm
release: ingx
name: ingx-nginx-ingress
---
# Source: nginx-ingress/templates/default-backend-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: nginx-ingress
chart: nginx-ingress-1.20.0
heritage: Helm
release: ingx
name: ingx-nginx-ingress-backend
---
# Source: nginx-ingress/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
labels:
app: nginx-ingress
chart: nginx-ingress-1.20.0
heritage: Helm
release: ingx
name: ingx-nginx-ingress
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- update
- watch
- apiGroups:
- extensions
- "networking.k8s.io" # k8s 1.14+
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- extensions
- "networking.k8s.io" # k8s 1.14+
resources:
- ingresses/status
verbs:
- update
---
# Source: nginx-ingress/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
labels:
app: nginx-ingress
chart: nginx-ingress-1.20.0
heritage: Helm
release: ingx
name: ingx-nginx-ingress
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingx-nginx-ingress
subjects:
- kind: ServiceAccount
name: ingx-nginx-ingress
namespace: default
---
# Source: nginx-ingress/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
labels:
app: nginx-ingress
chart: nginx-ingress-1.20.0
heritage: Helm
release: ingx
name: ingx-nginx-ingress
rules:
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- update
- watch
- apiGroups:
- extensions
- "networking.k8s.io" # k8s 1.14+
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- extensions
- "networking.k8s.io" # k8s 1.14+
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
- ingress-controller-leader-nginx
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- create
- get
- update
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
---
# Source: nginx-ingress/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
labels:
app: nginx-ingress
chart: nginx-ingress-1.20.0
heritage: Helm
release: ingx
name: ingx-nginx-ingress
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingx-nginx-ingress
subjects:
- kind: ServiceAccount
name: ingx-nginx-ingress
namespace: default
---
# Source: nginx-ingress/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: nginx-ingress
chart: nginx-ingress-1.20.0
component: "controller"
heritage: Helm
release: ingx
name: ingx-nginx-ingress-controller
spec:
externalTrafficPolicy: "Local"
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
app: nginx-ingress
component: "controller"
release: ingx
type: "LoadBalancer"
---
# Source: nginx-ingress/templates/default-backend-service.yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: nginx-ingress
chart: nginx-ingress-1.20.0
component: "default-backend"
heritage: Helm
release: ingx
name: ingx-nginx-ingress-default-backend
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
app: nginx-ingress
component: "default-backend"
release: ingx
type: "ClusterIP"
---
# Source: nginx-ingress/templates/controller-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: nginx-ingress
chart: nginx-ingress-1.20.0
component: "controller"
heritage: Helm
release: ingx
name: ingx-nginx-ingress-controller
spec:
replicas: 1
revisionHistoryLimit: 10
strategy:
{}
minReadySeconds: 0
template:
metadata:
labels:
app: nginx-ingress
component: "controller"
release: ingx
spec:
dnsPolicy: ClusterFirst
containers:
- name: nginx-ingress-controller
image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1"
imagePullPolicy: "IfNotPresent"
args:
- /nginx-ingress-controller
- --default-backend-service=default/ingx-nginx-ingress-default-backend
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=default/ingx-nginx-ingress-controller
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 33
allowPrivilegeEscalation: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
resources:
{}
hostNetwork: false
serviceAccountName: ingx-nginx-ingress
terminationGracePeriodSeconds: 60
---
# Source: nginx-ingress/templates/default-backend-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: nginx-ingress
chart: nginx-ingress-1.20.0
component: "default-backend"
heritage: Helm
release: ingx
name: ingx-nginx-ingress-default-backend
spec:
replicas: 1
revisionHistoryLimit: 10
template:
metadata:
labels:
app: nginx-ingress
component: "default-backend"
release: ingx
spec:
containers:
- name: nginx-ingress-default-backend
image: "k8s.gcr.io/defaultbackend-amd64:1.5"
imagePullPolicy: "IfNotPresent"
args:
securityContext:
runAsUser: 65534
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 0
periodSeconds: 5
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
ports:
- name: http
containerPort: 8080
protocol: TCP
resources:
{}
serviceAccountName: ingx-nginx-ingress-backend
terminationGracePeriodSeconds: 60
---
# Source: nginx-ingress/templates/controller-hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
labels:
app: nginx-ingress
chart: nginx-ingress-1.20.0
component: "controller"
heritage: Helm
release: ingx
name: ingx-nginx-ingress-controller
spec:
scaleTargetRef:
apiVersion: apps/v1beta1
kind: Deployment
name: ingx-nginx-ingress-controller
minReplicas: 2
maxReplicas: 100
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: 70
- type: Resource
resource:
name: memory
targetAverageUtilization: 70
I suspect this happened because I originally deployed it without the omitClusterIP parameters, and Helm v3 is trying to do a three-way merge with the original manifest, which does have clusterIP: "" in it:
helm get manifest ingx --revision 1 | grep "clusterIP"
clusterIP: ""
clusterIP: ""
I was able to fix it by deleting the existing release first and re-creating it with the omitClusterIP options. Bottom line: the workaround suggested by @bambash works only if you install the chart with those options set to true from the start:
$ helm upgrade --install ingress-test stable/nginx-ingress --set controller.service.omitClusterIP=true --set defaultBackend.service.omitClusterIP=true
It would be great if there were a way in Helm v3 to skip the merge with the existing manifest.
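The effect described above can be sketched with a toy three-way diff. This is an illustration of the idea only, not Helm's actual patch code, and the IP address is made up:

```python
def three_way_patch(old, live, proposed):
    """Toy three-way diff: patch any key from the old or proposed manifest
    whose proposed value differs from the live object (a missing key maps
    to None, i.e. a deletion)."""
    patch = {}
    for key in set(old) | set(proposed):
        if proposed.get(key) != live.get(key):
            patch[key] = proposed.get(key)
    return patch

old = {"clusterIP": ""}              # revision 1 rendered clusterIP: ""
live = {"clusterIP": "10.96.26.65"}  # hypothetical IP Kubernetes assigned
new = {}                             # upgrade with omitClusterIP=true omits the field

# The old manifest still claims the field, so omitting it now reads as
# "delete clusterIP" and a patch is generated -- which the API server rejects.
print(three_way_patch(old, live, new))  # -> {'clusterIP': None}
```

This is why the option only helps when set from the very first install: the old state recorded at revision 1 keeps the field in play.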
sorry, I should have specified that these values need to be set when the release is initially installed. Updating an existing release may prove trickier...
I'm running into this problem with metric-server-2.8.8, which doesn't have any clusterIP in its values, and some other charts, with helm v3.0.0-rc.2. any advice? I'm not sure how to proceed.
My issue seems to be with helmfile v0.95.0. I'll pursue it there :)
I have the same problem, even without setting the service type or clusterIP, with Helm v3.0.0-rc.2, if I use the --force option with the helm upgrade --install command. Without --force it works fine.
@johannges, I was just about to post the same. :+1:
setting omitClusterIP: true seems to work for defaultBackend and controller services but not the metrics one.
I think this is an issue with Helm's --force option during upgrade.
Helm tries to recreate the Service, but it also replaces spec.clusterIP, so it throws an error.
I can confirm this using my own custom chart.
Error: UPGRADE FAILED: failed to replace object: Service "litespeed" is invalid: spec.clusterIP: Invalid value: "": field is immutable
Actually it was a mistake on my end: omitting the clusterIP definition when the service (or the chart) is first created works fine 👍
I was encountering this error as well for existing deployed _kafka_ and _redis_ chart releases. Removing --force did indeed resolve this.
Now I'm getting a new error from the _redis_ release:
Error: UPGRADE FAILED: release redis failed, and has been rolled back due to atomic being set: cannot patch "redis-master" with kind StatefulSet: StatefulSet.apps "redis-master" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden
Agreed with @bacongobbler that this looks related to the Helm v3 three-way merge patch strategy that's likely resulting in passing in fields (even with the same values as before) to the update/patch that Kubernetes considers immutable/unchangeable after first creation.
In case anyone ends up here using Helm v3 via Terraform: since you can't directly tell it not to use --force, I had success manually deleting the chart using helm delete, then re-running Terraform. This sucks, but it does work.
edit: the whole error ("nginx-ingress-singleton-controller" is the release name I set; it has no specific meaning):
Error: cannot patch "nginx-ingress-singleton-controller" with kind Service: Service "nginx-ingress-singleton-controller" is invalid: spec.clusterIP: Invalid value:
"": field is immutable && cannot patch "nginx-ingress-singleton-default-backend" with kind Service: Service "nginx-ingress-singleton-default-backend" is invalid: sp
ec.clusterIP: Invalid value: "": field is immutable
on .terraform/modules/app_dev/nginx-ingress.tf line 1, in resource "helm_release" "nginx_ingress":
1: resource "helm_release" "nginx_ingress" {
@zen4ever nailed the issue in https://github.com/helm/helm/issues/6378#issuecomment-532766512. I'll try to explain it in more detail....
As others have pointed out, the issue arises when a chart defines a clusterIP with an empty string. When the Service is installed, Kubernetes populates this field with the clusterIP it assigned to the Service.
When helm upgrade is invoked, the chart asks for the clusterIP to be removed, hence why the error message is spec.clusterIP: Invalid value: "": field is immutable.
This happens because of the following behaviour:
- On install, the chart specified it wanted the clusterIP to be an empty string.
- Kubernetes auto-assigned the Service a clusterIP. We'll use 172.17.0.1 for this example.
- On helm upgrade, the chart wants the clusterIP to be an empty string (or in @zen4ever's case above, it is omitted).

When generating the three-way patch, Helm sees that the old state was "", the live state is currently "172.17.0.1", and the proposed state is "". Helm detected that the user requested to change the clusterIP from "172.17.0.1" to "", so it supplied a patch.
In Helm 2, it ignored the live state, so it saw no change (old state: clusterIP: "" to new state: clusterIP: ""), and no patch was generated, bypassing this behaviour.
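The difference between the two strategies can be sketched as follows. This is a simplified illustration of the decision logic, not Helm's actual implementation:

```python
def three_way_patch(old, live, proposed):
    """Helm 3 style: patch any field the chart manages whose proposed
    value differs from the cluster's live value."""
    patch = {}
    for key in set(old) | set(proposed):
        if proposed.get(key) != live.get(key):
            patch[key] = proposed.get(key)
    return patch

def two_way_patch(old, proposed):
    """Helm 2 style: the live state is ignored; patch only where the
    old manifest differs from the proposed one."""
    return {k: v for k, v in proposed.items() if old.get(k) != v}

old = {"clusterIP": ""}              # rendered at install: clusterIP: ""
live = {"clusterIP": "172.17.0.1"}   # Kubernetes filled in the assigned IP
proposed = {"clusterIP": ""}         # the upgrade renders clusterIP: "" again

# Helm 3: proposed ("") differs from live ("172.17.0.1"), so a patch setting
# clusterIP back to "" is generated -- the API server rejects it as immutable.
print(three_way_patch(old, live, proposed))  # -> {'clusterIP': ''}

# Helm 2: old equals proposed, so no patch was generated and no error occurred.
print(two_way_patch(old, proposed))          # -> {}
```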
My recommendation would be to change the template output. If no clusterIP is being provided as a value, then don't set the value to an empty string... Omit the field entirely.
e.g. in the case of stable/nginx-ingress:
spec:
{{- if not .Values.controller.service.omitClusterIP }}
clusterIP: "{{ .Values.controller.service.clusterIP }}"
{{- end }}
Should be changed to:
spec:
{{- if not .Values.controller.service.omitClusterIP }}
{{ with .Values.controller.service.clusterIP }}clusterIP: {{ quote . }}{{ end }}
{{- end }}
This is also why --set controller.service.omitClusterIP=true works in this case.
TL;DR don't do this in your Service templates:
clusterIP: ""
Otherwise, Helm will try to change the service's clusterIP from an auto-generated IP address to the empty string, hence the error message.
Hope this helps!
As a temporary workaround while this issue gets resolved, I found I was able to perform an upgrade by doing the following: first look up the clusterIPs Kubernetes assigned to the Services:
kubectl get svc | grep ingress
then pin those values explicitly in the chart values:
controller:
  service:
    clusterIP: <cluster-ip-address-for-controller>
defaultBackend:
  service:
    clusterIP: <cluster-ip-address-for-default-backend>
I've tested this for a cluster I'm running and it didn't require any recreation.
That works too. Good call @treacher. Setting the same value via --set or in your values file generates no patch, as the upgrade doesn't want to change the value of the clusterIP.
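Why pinning the live IP works can be seen with the same kind of toy three-way diff (an illustration only, not Helm's real code; the IP address is hypothetical):

```python
def three_way_patch(old, live, proposed):
    """Toy three-way diff used for illustration only (not Helm's real code)."""
    patch = {}
    for key in set(old) | set(proposed):
        if proposed.get(key) != live.get(key):
            patch[key] = proposed.get(key)
    return patch

old = {"clusterIP": ""}              # what revision 1 rendered
live = {"clusterIP": "10.0.171.23"}  # hypothetical IP from `kubectl get svc`

# Pinning the live IP in values makes proposed == live: an empty patch,
# so the upgrade never touches the immutable field.
print(three_way_patch(old, live, {"clusterIP": "10.0.171.23"}))  # -> {}
```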
Closing as working intentionally as per the three-way merge patch behaviour described above. Action item is for these charts to follow the recommendation provided above in https://github.com/helm/helm/issues/6378#issuecomment-557746499. Nothing to do here on Helm's end. :)
https://github.com/helm/charts/pull/19146/files created! Thanks @bacongobbler
> @zen4ever nailed the issue in #6378 (comment). [...] My recommendation would be to change the template output. If no clusterIP is being provided as a value, then don't set the value to an empty string... Omit the field entirely.

Hi @bacongobbler, I think that since, if no value is provided, we will still wind up with clusterIP: "", it would be better to have clusterIP: "" completely commented out in the values file. That omits it from rendered manifests by default and should save future headaches. However, if you are using Helm 3 and the current Helm state has clusterIP: "" set, you need to hardcode the clusterIP addresses in your values files.
Hey @bacongobbler, we faced the same issue while migrating a Helm v2 release to Helm v3. We use type: ClusterIP in the Service but omit clusterIP entirely, and we get:
Error: UPGRADE FAILED: failed to replace object: Service "dummy" is invalid: spec.clusterIP: Invalid value: "": field is immutable
We don't have spec.clusterIP: in our Helm template, but we got this error after migrating the release via helm 2to3.
Service template:
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}
labels:
app: {{ .Values.image.name }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "-" }}
cluster: {{ default "unknown" .Values.cluster }}
region: {{ default "unknown" .Values.region }}
datacenter: {{ default "unknown" .Values.datacenter }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: ClusterIP
ports:
- port: {{ .Values.service.port }}
targetPort: {{ .Values.service.port }}
protocol: TCP
name: http
selector:
app: {{ .Values.image.name }}
release: {{ .Release.Name }}
Same problem here. The thing is that we haven't touched the Service; it's the Ingress that was changed before the upgrade.
This affects resources with immutable fields. If you drop the --force flag from helm upgrade --install and don't touch an immutable field, everything works fine. But what if you need to bump the apiVersion of a resource? You need to recreate the resource, but Helm 3 won't upgrade it....
@bacongobbler ^^^
I tried updating an HPA to a new apiVersion with Helm 3:
Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: HorizontalPodAutoscaler, namespace: stage, name: dummy-stage
@bacongobbler and @kritcher722: the thumbs-downed comment has been updated, if you wish to remove the thumbs down. However, if you still disagree, kindly elaborate on why it is a good idea to have clusterIP: "" in the rendered manifests.
Looks like Microsoft is a mentor of the project. I recognize the style. :)
Please reopen. The issue is not fixed. The "hack" suggested by nasseemkullah is not appropriate. Don't ask people to jump through hoops. Just fix it. Very poor migration path. Helm sucks.
@antonakv what a way to start the year :)
I think in general we are playing with fire when providing clusterIP as a configurable value in a chart, and cannot totally blame one tool/person/PR in particular.
If clusterIP needs to be a configurable value, by default it should not be in the rendered template, that's the idea of my commenting out in the values files as per https://github.com/helm/charts/blob/270172836fd8cf56d787cf7d04d938856de0c794/stable/nginx-ingress/values.yaml#L236
This, if I'm not mistaken, should prevent any future headaches for those who install the chart as of that change. But for those of us (myself included) who installed it prior and then migrated to Helm 3, I'm afraid hardcoding the current clusterIP values in our values files OR uninstalling and reinstalling the chart (causes downtime!) are the only options I see.
Opinions are my own, I am not paid to work on helm, just an end user like you. Those who are paid to work on this full time may be able to provide more insight.
Happy new year and good luck! Don't give up on helm, together we can make it better.
> Hey @bacongobbler, we faced the same issue while migrating a Helm v2 release to Helm v3. We use type: ClusterIP in the Service but omit clusterIP entirely, and we get: Error: UPGRADE FAILED: failed to replace object: Service "dummy" is invalid: spec.clusterIP: Invalid value: "": field is immutable

We have the same problem. We don't define clusterIP at all in our chart, and it is not present in the final template. However, we still get the same error, and only with the --force flag.
We're running into the same issue:
apiVersion: v1
kind: Service
{{ include "mde.metadata" $ }}
spec:
ports:
- name: {{ include "mde.portName" $ | quote }}
port: {{ include "mde.port" $ }}
protocol: TCP
targetPort: {{ include "mde.port" $ }}
selector:
app: {{ include "mde.name" $ }}
sessionAffinity: None
type: ClusterIP
spec.clusterIP is not part of the Service template, yet with Helm 3.0.2 and a helm upgrade ... --force --install call, we're also seeing:
Error: UPGRADE FAILED: failed to replace object: Service "dummy" is invalid: spec.clusterIP: Invalid value: "": field is immutable
Please re-open.
@tomcruise81 please see https://github.com/helm/helm/issues/7350 for the thread on --force. That results in the same error, but it is due to how kubectl replace appears to work. It is a separate issue than what's described here, which pertains to Service clusterIPs and the three-way merge patch strategy (helm upgrade without the --force flag).
@bacongobbler - thanks for the quick response and clarification. Looking at:
https://github.com/helm/helm/blob/a963736f6675e972448bf7a5fd141628fd0ae4df/pkg/kube/client.go#L405-L411
which makes use of https://github.com/kubernetes/cli-runtime/blob/master/pkg/resource/helper.go#L155-L181, it doesn't appear that the call to helper.Replace does the same thing as kubectl replace -f ... --force (note the --force at the end).
I'm guessing that this is where a lot of the confusion is.
My expectation was that helm upgrade ... --force, with its replacement strategy, would do the same thing as kubectl replace -f ... --force.
> We have the same problem. We don't define clusterIP at all in our chart and it is not present in the final template. However, we still get the same error, and only with the --force flag.
I also checked that there is no clusterIP in the release manifest:
$ helm get manifest paywall-api-ee | grep clusterIP
$
Same here: we don't define clusterIP anywhere but still see the error.
Messing around with this some more, I've observed that:
- helm upgrade ... --force --install results in: The Service "dummy" is invalid: spec.clusterIP: Invalid value: "": field is immutable
- helm template ... | kubectl apply -f - works
- helm template ... | kubectl replace -f - results in: The Service "dummy" is invalid: spec.clusterIP: Invalid value: "": field is immutable
- helm template ... | kubectl replace --force -f - works

kubectl version: 1.14.6
helm version: 3.0.2
@tomcruise81 you can try the helm 2to3 plugin to migrate your release from Helm 2 to Helm 3, and drop --force if you previously used it.
It works for us.
As for me (and it looks like for other folks too), --force misbehaves and should handle this case with immutable fields.
@alexandrsemak - thanks for the recommendation. In my instance, I'm seeing this on a chart that has only been installed or upgraded using helm 3.
Same issue for me! Using
$ helm install my-release xxxxx
$ helm upgrade --install --force my-release xxxxx
In my case, I'm not defining clusterIP on any of the services used in my chart, but I face the same issue (see the spec below):
spec:
type: {{ .Values.service.type }}
{{- if and (eq .Values.service.type "LoadBalancer") (not (empty .Values.service.loadBalancerIP)) }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
ports:
- name: htttp-XXX
port: {{ .Values.service.port }}
targetPort: XXX
{{- if and (or (eq .Values.service.type "NodePort") (eq .Values.service.type "LoadBalancer")) (not (empty .Values.service.nodePort)) }}
nodePort: {{ .Values.service.nodePort }}
{{- else if eq .Values.service.type "ClusterIP" }}
nodePort: null
{{- end }}
selector:
app.kubernetes.io/name: XXX
app.kubernetes.io/instance: {{ .Release.Name }}
As other users said before, the reason is that Kubernetes auto-assigns the Service a clusterIP the first time (e.g. clusterIP: 10.96.26.65), and it conflicts with clusterIP: "" when you try to upgrade. Please note that I'm not generating clusterIP: "" in my templates.
Please reopen this @bacongobbler
I have the same issue.
@juan131 @Ronsevet: remove --force. Its meaning changed in Helm 3.
Facing the same issue on custom charts.
We don't define clusterIP anywhere.
Helm v3.0.2
kubectl 1.14.8
The issue is that sometimes a release remains in a failed state even though the pods are created and running. If we try to upgrade the same release, it doesn't work without --force.
Since the pods are running, the release cannot be deleted and recreated.
There has to be some way to use --force.
Same for me: I just added an additional label to a service and hit this error. I also don't define clusterIP anywhere. Please reopen the issue.
@bacongobbler We are deploying StorageClasses as part of our chart, and a StorageClass's parameters are immutable. So in the next release, when we update the value of some StorageClass parameter, helm upgrade --force fails as well.
Not sure how to handle this case for StorageClass updates. Any suggestions?
Error: UPGRADE FAILED: failed to replace object: StorageClass.storage.k8s.io "ibmc-s3fs-standard-cross-region" is invalid: parameters: Forbidden: updates to parameters are forbidden.
It worked fine in Helm v2, where helm upgrade --force would force-delete and recreate the StorageClass.
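Until Helm offers a replacement strategy for this, one manual workaround (a sketch, not an endorsement: the StorageClass is briefly absent, the release and chart names are placeholders, and the StorageClass name is taken from the error above) is to delete the immutable object out-of-band and let the next upgrade recreate it:

```shell
# Delete the StorageClass whose parameters changed, then upgrade
# normally so Helm recreates it from the new template.
kubectl delete storageclass ibmc-s3fs-standard-cross-region
helm upgrade my-release ./my-chart
```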
If anyone is experiencing symptoms that are not a result of the explanation provided in https://github.com/helm/helm/issues/6378#issuecomment-557746499, can you please open a new issue with your findings and how we can reproduce it on our end?
The issue raised by the OP was because of the scenario provided above, where a chart set the ClusterIP to an empty string on install. It is entirely possible that there are other scenarios where this particular case can crop up, such as others have mentioned with the use of the --force flag. Those cases should be discussed separately, as the diagnosis and solution may differ than the advice provided earlier.
Thank you!
@mssachan see #7082 and the draft proposal in #7431 for your use case. That proposal aims to implement kubectl replace --force, which would be similar to the behaviour of Helm 2's helm install --force.
It's good that this is happening. Even when omitting the --force flag, I still get the error when upgrading charts. For example, with cert-manager:
2020-03-05 12:15:19 CRITICAL: Command returned [ 1 ] exit code and error message [ Error: UPGRADE FAILED: cannot patch "cert-manager-cainjector" with kind Deployment: Deployment.apps "cert-manager-cainjector" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"cainjector", "app.kubernetes.io/instance":"cert-manager", "app.kubernetes.io/managed-by":"Helm", "app.kubernetes.io/name":"cainjector"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable && cannot patch "cert-manager" with kind Deployment: Deployment.apps "cert-manager" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"cert-manager", "app.kubernetes.io/instance":"cert-manager", "app.kubernetes.io/managed-by":"Helm", "app.kubernetes.io/name":"cert-manager"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable && cannot patch "cert-manager-webhook" with kind Deployment: Deployment.apps "cert-manager-webhook" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"webhook", "app.kubernetes.io/instance":"cert-manager", "app.kubernetes.io/managed-by":"Helm", "app.kubernetes.io/name":"webhook"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
@sc250024 I have exactly the same issue after upgrading Helm v2 to v3. The migration went smoothly with no errors, but then trying to upgrade cert-manager through Helm failed with the same output.
# helm upgrade cert-manager jetstack/cert-manager --namespace cert-manager --atomic --cleanup-on-fail
# helm version
version.BuildInfo{Version:"v3.1.1", GitCommit:"afe70585407b420d0097d07b21c47dc511525ac8", GitTreeState:"clean", GoVersion:"go1.13.8"}
Are there any workarounds for when --force is not used? No option anywhere in my chart sets clusterIP. This is my Service manifest:
apiVersion: v1
kind: Service
metadata:
  name: "{{ .Values.deploymentBaseName }}-{{ .Values.skaffoldUser }}"
  labels:
    name: "{{ .Values.deploymentBaseName }}-{{ .Values.skaffoldUser }}"
spec:
  ports:
    - port: {{ .Values.servicePort }}
      targetPort: {{ .Values.containerPort }}
      protocol: TCP
      name: http
    - name: debugger-http
      port: {{ .Values.debuggerPort }}
      targetPort: {{ .Values.debuggerPort }}
      protocol: TCP
  selector:
    app: "{{ .Values.deploymentBaseName }}-{{ .Values.skaffoldUser }}"
  type: ClusterIP
@davidfernandezm did you ever find a solution for this? I'm seeing the same on my end and my services are defined exactly as yours are. No option for clusterIP is being set, and yet Helm still fails on an upgrade.
Same here
Getting this also, re: the above two comments.
Please provide more information. We cannot help you without understanding the cause or how this issue crops up in your case. Thanks.
I have the same problem, even without setting the service type or clusterIP, with helm v3.0.0-rc.2, if I use the --force option with the helm upgrade --install command. Without --force it works fine.
Cool! Inspired by your answer, I found I had to comment out the force: line in my helmfile YAML:
helmDefaults:
tillerless: true
verify: false
wait: true
timeout: 600
# force: true   <---- THIS ONE IS COMMENTED OUT
It works 🎉
I tried all of the above, none worked for me. I had to disable nginx-ingress from my chart, do an upgrade, enable it again, and upgrade again. This led to a change in the IP address assigned by the cloud provider, but no harm done.
I have the same problem, even without setting the service type or clusterIP, with helm v3.0.0-rc.2, if I use the --force option with the helm upgrade --install command. Without --force it works fine.
Best solution, it works for me, thank you!
We are having the same issue and could not find any workaround.
It is pretty simple to reproduce:
helm install in stable/inbucket
helm upgrade in stable/inbucket
Error: UPGRADE FAILED: cannot patch "in-inbucket" with kind Service: Service "in-inbucket" is invalid: spec.clusterIP: Invalid value: "": field is immutable
I was wondering why --force does not work here. Isn't it supposed to force resource updates through a replacement strategy? If it really followed a replacement strategy, the Service would be removed and then replaced.
@bacongobbler I've got to this thread after checking https://github.com/helm/helm/issues/7956
As with all previous commenters: we don't have "clusterIP" in templates at all, but error is still present with latest Helm if --force flag is used.
Helm version: 3.4.1
"helm -n kube-system get manifest CHART_NAME | grep clusterIP" shows no results.
Error:
field is immutable && failed to replace object: Service "SERVICE_NAME" is invalid: spec.clusterIP: Invalid value: "": field is immutable
The same explanation provided in https://github.com/helm/helm/issues/6378#issuecomment-557746499 also applies here in your case @nick4fake. The difference is that with --force, you are asking Kubernetes to take your fully-rendered manifest and forcefully overwrite the current live object. Since your manifest does not contain a clusterIP field, Kubernetes assumes you are trying to remove the clusterIP field from the live object, hence the error Invalid value: "": field is immutable.
@bacongobbler I am really sorry if I miss something here, maybe I simply don't know enough about Helm internals.
"My recommendation would be to change the template output. If no clusterIP is being provided as a value, then don't set the value to an empty string... Omit the field entirely."
So what is the solution? Does that mean that "--force" flag can't be used at all if clusterIP field is not set to some static value?
As far as Kubernetes is concerned: yes.
According to my understanding this is a problem with Kubernetes, because "forcefully overwriting" does not behave the same way as "deleting and recreating". Is there an upstream bug report for it?
On the other hand, Helm is also misleading, because --force is described as "force resource updates through a replacement strategy". In reality it does not do any replacement; it just attempts to forcefully overwrite resources (it would be better to name the flag --force-overwrite). A forceful replacement would delete and recreate the resource (there could be a flag --force-recreate). Of course, --force-recreate could be a bit dangerous for some resources, but it would always succeed.
Anyway, Helm could implement a fallback workaround for this type of issue: if the current behavior (effectively --force-overwrite) fails with an immutable-field error, it could delete and recreate the resource (as --force-recreate would).
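Until something like that exists in Helm itself, the fallback can be approximated in a CI script (a rough sketch with placeholder release and chart names; note that kubectl replace --force is destructive and bypasses Helm's release tracking):

```shell
# Try a normal upgrade first; on an immutable-field error, fall back
# to force-replacing the rendered manifests.
if ! out=$(helm upgrade --install my-release ./my-chart 2>&1); then
  if printf '%s' "$out" | grep -q 'field is immutable'; then
    helm template my-release ./my-chart | kubectl replace --force -f -
  else
    printf '%s\n' "$out" >&2
    exit 1
  fi
fi
```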