Version of Helm and Kubernetes:
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.3-eks", GitCommit:"58c199a59046dbf0a13a387d3491a39213be53df", GitTreeState:"clean", BuildDate:"2018-09-21T21:00:04Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Which chart:
stable/nginx-ingress
What happened:
Error: UPGRADE FAILED: Service "nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable
What you expected to happen:
The upgrade should not break compatibility.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
With default config:
https://github.com/helm/charts/blob/master/stable/nginx-ingress/values.yaml#L163
This will add clusterIP anyway:
https://github.com/helm/charts/commit/e17c2f95a43c1fb7d2307707df96568bba497527#diff-b2f5526b8650ff34ad6c9084e65609e8R21
Can confirm this on Kubernetes 1.11.4 with Helm 2.11.0.
same here :(
I have the same problem.
I have the same issue. I can't upgrade or roll back a revision.
Helm 2.11.0
Kubernetes 1.11.4
This also happened to me. The problem was introduced in helm chart version 0.31.0. I tried upgrading using the previous chart version, 0.30.0, and had no trouble:
helm upgrade ingress stable/nginx-ingress --version 0.30.0 -f ingress-values.yaml
I am using helm 2.11.0, Kubernetes 1.10.6.
Same here
Same issue with latest 0.31
Using --version 0.30.0 allowed us to change critical configuration. Bug reproductions steps were mentioned here https://github.com/helm/charts/pull/7933#issuecomment-439713367
Ping @robermorales @jackzampolin @mgoodness @chancez, is this being worked on by someone? Are you aware of the issue?
Same here with chart 1.0.1 (k8s 1.12.3, helm 2.8.1)
We all agree that we need to revert this change, correct?
(my personal opinion) The initial PR (#7933) may fix a real issue, but it should not break things unless that is unavoidable, in which case we should tell people how to upgrade manually.
I also think that a revert would force people to upgrade manually, again.
In my opinion, the situation that leads to people needing to upgrade manually is caused by the original bug, not by the fix.
But you can proceed as you prefer.
To manually work around this and get back to a working upgrade state (for releases installed via the helm chart prior to 0.30.0):
```shell
YOUR_INGRESS_CONTROLLER_NS=<REPLACE_ME>
CLUSTER_IP=$(kubectl -n $YOUR_INGRESS_CONTROLLER_NS get services nginx-ingress-controller -o jsonpath="{.spec.clusterIP}")
# add controller.service.clusterIP: "$CLUSTER_IP" to your values
helmfile sync # or a normal helm upgrade
```
@peterromfeldhk Does this mean you now need to manage the clusterIP in the config values? I am looking for an upgrade path that does not require adding the IP to the config.
What happens if I do this workaround and then, when this is eventually fixed and I want to remove the cluster IP, won't I get stuck again but now with the reverse problem?
Effectively everyone using this chart since pre-November (3 months) is blocked from updating. Can a maintainer for this chart escalate this issue to either make an announcement that updates will be unsupported and no fix provided (closing this), or that the breaking change will be reverted? Thanks.
@razor-x Yes, sadly I currently still need the clusterIP in the config :( I hope I can get rid of it soon, but I needed to upgrade to fix another issue.
To update nginx I wrote this simple script that speeds up the process:
```shell
NAMESPACE=xxxx
RELEASE=yyyyy
VALUES=zzzzz.yaml
CLUSTER_IP=$(kubectl -n ${NAMESPACE} get services ${RELEASE}-controller -o jsonpath="{.spec.clusterIP}")
helm upgrade -f ${VALUES} ${RELEASE} stable/nginx-ingress --set controller.service.clusterIP=${CLUSTER_IP}
```
I installed this from scratch just a couple days ago with brand new cluster / helm / chart and I'm now facing the exact same issue.
/assign @jackzampolin
@guizmaii: GitHub didn't allow me to assign the following users: jackzampolin.
/assign @mgoodness
Is there any news about this?
Just ran into this issue when testing a deployment.
Edit: Looks like it's fixed after a purge and re-install. So, this is still an issue for people with older charts in production, but it looks like PR #10993 did the job for those of us who haven't yet deployed.
getting this too -
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
As a possible compromise, can we add a config value flag that opts in to the old behaviour? Something like
```yaml
{{- if and .Values.controller.service.clusterIP (not .Values.controller.service.omitClusterIP) }}
```
Current users would not notice this flag (default omitClusterIP = false), and users stuck unable to upgrade could set omitClusterIP = true. Then, when a non-breaking solution is found, the flag can be removed.
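For illustration, a minimal sketch of how the proposed guard could look in the chart's controller-service.yaml; note that `omitClusterIP` is only the flag suggested above, not an existing chart value:

```yaml
spec:
  {{- if and .Values.controller.service.clusterIP (not .Values.controller.service.omitClusterIP) }}
  clusterIP: "{{ .Values.controller.service.clusterIP }}"
  {{- end }}
```

With both conditions in the `and`, the `clusterIP:` line is rendered only when a value is set and the opt-out is not enabled, so existing installs see no change.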
Can't believe this issue is 4 months old now. Lots of users are stuck, can we get a patch in soon?
~We just got bitten by this.~
The upgrade was super easy with https://github.com/helm/charts/issues/9227#issuecomment-451058505
We may just bite the bullet and take the downtime by using `helm upgrade --force`. When I tested on staging it amounted to around 20s of downtime for us. Has anyone else tried this? How much downtime did it cause?
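For reference, a sketch of the forced-upgrade approach (the release and namespace names here are hypothetical placeholders); `--force` replaces resources via delete and recreate, which is what clears the immutable `clusterIP` but also causes the brief outage mentioned above:

```shell
# Hypothetical names -- substitute your own release and namespace.
RELEASE=nginx-ingress
NAMESPACE=ingress
# --force deletes and recreates changed resources instead of patching them,
# so the Service (and its immutable clusterIP) is rebuilt from scratch.
CMD="helm upgrade --force --namespace ${NAMESPACE} ${RELEASE} stable/nginx-ingress"
echo "${CMD}"
```

Try it against a staging cluster first to measure the downtime for your setup.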
not to diminish the importance of this bug, but we were able to upgrade without downtime making use of the workaround described above https://github.com/helm/charts/issues/9227#issuecomment-451058505 / https://github.com/helm/charts/issues/9227#issuecomment-457136396
Just came across this issue when trying to upgrade to 1.1.1 with the following values.yaml:
```yaml
rbac:
  create: true
controller:
  config:
    use-proxy-protocol: "true"
```
Had to do a rollback, but one of my services just kept returning a 503, so I am not sure why that broke.
Has anyone had any success with the suggestion above? I really don't want to try again if it's going to break things.
I tried the suggestions above, which allowed me to update, but I could no longer access my sites. I had to roll back to the previous version.
Just ran into the same problem with Chart version nginx-ingress-1.4.0
I was just able to update from 0.30.0 => 1.4.0 without error.
Can't believe this issue is 4 months old now. Lots of users are stuck, can we get a patch in soon?
10000%
I am looking for an upgrade path that does not require adding the IP to the config.
10000%
I was just able to update from 0.30.0 => 1.4.0 without error.
I was not able to do this.
As a possible compromise, can we add a config value flag that opts-in to the old behaviour? Something like
Good idea @razor-x! I made a PR: https://github.com/helm/charts/pull/13646
I was also able to update from 0.30.0 to 1.6.0 since the ClusterIP stuff was reverted.
I was also able to update from 0.30.0 to 1.6.0 since the ClusterIP stuff was reverted.
It's not though:
https://github.com/helm/charts/blob/master/stable/nginx-ingress/templates/controller-service.yaml#L21
```yaml
spec:
  clusterIP: "{{ .Values.controller.service.clusterIP }}"
```
Yes it is, check history.
@larsha
I was also able to update from 0.30.0 to 1.6.0 since the ClusterIP stuff was reverted.
Yes it is, check history.
That link is to master. History is irrelevant. I just tested with the latest release (currently 1.6.0):
```shell
helm diff upgrade --allow-unreleased ingress02 stable/nginx-ingress --version 1.6.0 --values <omitted> --kube-context <omitted>
```
```diff
default, ingress02-nginx-ingress-controller, DaemonSet (extensions) has changed:
  # Source: nginx-ingress/templates/controller-daemonset.yaml
  apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    labels:
      app: nginx-ingress
-     chart: nginx-ingress-1.0.1
+     chart: nginx-ingress-1.6.0
      component: "controller"
      heritage: Tiller
      release: ingress02
    name: ingress02-nginx-ingress-controller
<omitted>

default, ingress02-nginx-ingress-controller, Service (v1) has changed:
  # Source: nginx-ingress/templates/controller-service.yaml
  apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: nginx-ingress
-     chart: nginx-ingress-1.0.1
+     chart: nginx-ingress-1.6.0
      component: "controller"
      heritage: Tiller
      release: ingress02
    name: ingress02-nginx-ingress-controller
  spec:
+   clusterIP: ""
    ports:
<omitted>
```
```shell
helm upgrade --install --reset-values ingress02 stable/nginx-ingress --version 1.6.0 --values <omitted> --kube-context <omitted>
```
```
UPGRADE FAILED
ROLLING BACK
Error: Service "ingress02-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable
Error: UPGRADE FAILED: Service "ingress02-nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable
err: exit status 1
```
Yes, because you seem to be upgrading from version 1.0.1, which has the changed behavior. The issue was introduced in 0.31.0 and reverted in 1.2.3.
I have always been talking about upgrading from a version below 0.31.0.
I have always been talking about upgrading from a version below 0.31.0.
Ahh OK, I understand 👍
For everyone else who installed the chart in that 2 year window (early 2017 - early 2019) and who wants to upgrade this chart will hit this issue. Hopefully my PR will be merged soon.
It’s not a two year window. The change was introduced November 9 and reverted January 30. So people who installed the chart with versions between 0.31.0 and 1.2.2 will face this issue as of today.
Earlier the issue was affecting more people, that’s why it was reverted.
So you could say this has indeed been an issue for people on different versions trying to install different versions 😁
This should be resolved with #13646.
Follow the instructions on the Readme for doing the upgrade: https://github.com/helm/charts/tree/master/stable/nginx-ingress#helm-error-when-upgrading-specclusterip-invalid-value-
This is still broken, I just upgraded to 1.30.0 and I can't unset my clusterIP values.
Here is what I did to fix this issue for my ingresses (I had a lot of them, so I scripted it).
Since helm3 made this easier (no need for protobuf), I also took the chance to migrate from helm2 to helm3.
```shell
patch_nginx_release() {
  set -v
  local secret=$1
  # patch ClusterIP
  patched=$(kubectl get secrets $secret -o jsonpath='{.data.release}' | base64 -d | base64 -d | gzip -d - | gsed 's# clusterIP: \\"\\"\\n##g' | gzip - | base64 | base64)
  kubectl patch secret $secret -p "{\"data\":{\"release\":\"$patched\"}}" --type=merge
  # patch rbac v1beta1 deprecation, needed for 1.29.x
  patched=$(kubectl get secrets $secret -o jsonpath='{.data.release}' | base64 -d | base64 -d | gzip -d - | gsed 's#rbac.authorization.k8s.io/v1beta1#rbac.authorization.k8s.io/v1#g' | gzip - | base64 | base64)
  kubectl patch secret $secret -p "{\"data\":{\"release\":\"$patched\"}}" --type=merge
  set +v
}

# Convert helm2 to helm3 (back up the helm2 release ConfigMap first)
kubectl get cm -l NAME=MYHELMRELEASE -o yaml -n kube-system > backup-MYHELMRELEASE-helm2.yaml
helm3 2to3 convert --delete-v2-releases MYHELMRELEASE

# Get the latest helm3 release secret via: kubectl get secrets
patch_nginx_release sh.helm.release.v1.MYHELMRELEASE.REVISION

# Verify that there is no ClusterIP change pending
helm3 diff upgrade --namespace NAMESPACE -f values.yaml MYHELMRELEASE stable/nginx --version 1.29.6

# Update labels for helm3 and upgrade from 1.25 to 1.29
helm3 upgrade --namespace NAMESPACE -f values.yaml MYHELMRELEASE stable/nginx --version 1.29.6
```
I used this 10-20 times to upgrade from nginx-ingress 1.25.0 to 1.29.6.
(I do not give any guarantee that this code works for anyone; please try it in a test cluster before applying it anywhere else.)
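The double `base64 -d` in the function above can look odd. It works because `kubectl ... -o jsonpath='{.data.release}'` returns the Secret data still base64-encoded, and Helm 3 itself stores the release as base64 of a gzipped payload, so reading it back takes two base64 decodes plus a gunzip. A small local sketch of that round trip (no cluster needed; the manifest string is made up):

```shell
# A made-up manifest fragment standing in for the stored release payload.
manifest='spec:
  clusterIP: ""'
# Encode the way the release ends up when read via jsonpath:
# gzip -> base64 (Helm's own encoding) -> base64 (Secret data encoding)
encoded=$(printf '%s' "$manifest" | gzip | base64 | base64)
# Decode: two base64 passes, then gunzip -- same pipeline as the patch script.
decoded=$(printf '%s' "$encoded" | base64 -d | base64 -d | gzip -d)
printf '%s\n' "$decoded"
```

The patch script edits the text between the decode and re-encode steps, which is why the sed substitution sits in the middle of that pipeline.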
My solution to get my nginx-ingress up to date with respect to the clusterIP and healthCheckNodePort problems is the following script.
Helm:
Version.BuildInfo{Version:"v3.1.1", GitCommit:"afe70585407b420d0097d07b21c47dc511525ac8", GitTreeState:"clean", GoVersion:"go1.13.8"}
Kubernetes:
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-12T13:43:46Z", GoVersion:"go1.13.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.10-gke.21", GitCommit:"11e3461149a8b3b6b172b136eaab0e8eea1ef3f4", GitTreeState:"clean", BuildDate:"2020-01-29T19:38:12Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}
I also had a problem when upgrading Kubernetes that forced me to delete deployment.extensions/nginx-ingress-default-backend and daemonset.extensions/nginx-ingress-controller.
```shell
install_nginx () {
  BASE_DIR=$(pwd)
  HELM_BASE_DIR="${BASE_DIR}/helm/nginx-ingress"
  NGINX_NAMESPACE="nginx-ingress"
  NGINX_NAME="nginx-ingress"
  APPVERSION=$(helm show chart stable/nginx-ingress | grep appVersion | awk '{print $2}')

  echo "Install/Upgrade Nginx Ingress"
  EXTRA_SETTINGS=""

  # Get immutable settings from the current deployment and add them to the update (if any)
  HEALTHCHECKNODEPORT=$(kubectl get service --namespace ${NGINX_NAMESPACE} -o=jsonpath='{.items[?(@.metadata.name=="nginx-ingress-controller")].spec.healthCheckNodePort}')
  CONTROLLER_CLUSTERIP=$(kubectl get service --namespace ${NGINX_NAMESPACE} -o=jsonpath='{.items[?(@.metadata.name=="nginx-ingress-controller")].spec.clusterIP}')
  if [[ ${CONTROLLER_CLUSTERIP} != "" ]]; then
    EXTRA_SETTINGS="${EXTRA_SETTINGS} --set controller.service.clusterIP=${CONTROLLER_CLUSTERIP}"
  fi
  if [[ ${HEALTHCHECKNODEPORT} != "" ]]; then
    EXTRA_SETTINGS="${EXTRA_SETTINGS} --set controller.service.healthCheckNodePort=${HEALTHCHECKNODEPORT}"
  fi
  DEFAULT_BACKEND_CLUSTERIP=$(kubectl get service --namespace ${NGINX_NAMESPACE} -o=jsonpath='{.items[?(@.metadata.name=="nginx-ingress-default-backend")].spec.clusterIP}')
  # Note: this condition originally tested CONTROLLER_CLUSTERIP by mistake
  if [[ ${DEFAULT_BACKEND_CLUSTERIP} != "" ]]; then
    EXTRA_SETTINGS="${EXTRA_SETTINGS} --set defaultBackend.service.clusterIP=${DEFAULT_BACKEND_CLUSTERIP}"
  fi

  # Namespace is hardcoded in this yaml
  kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-${APPVERSION}/deploy/static/mandatory.yaml

  helm upgrade --install ${NGINX_NAME} \
    --namespace ${NGINX_NAMESPACE} \
    -f ${HELM_BASE_DIR}/values.yaml \
    ${EXTRA_SETTINGS} \
    stable/nginx-ingress
}
install_nginx
```
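The core trick in the script above is reading the immutable Service fields back out of the cluster and passing them forward via `--set`. That carry-forward logic can be checked without a cluster by substituting hard-coded stand-in values (the IP below is hypothetical, and the empty node port simulates a Service without that field):

```shell
# Stand-ins for values that would normally come from kubectl jsonpath queries.
CONTROLLER_CLUSTERIP="10.0.0.10"   # hypothetical existing clusterIP
HEALTHCHECKNODEPORT=""             # empty: field not set on this Service

EXTRA_SETTINGS=""
# Only carry a field forward when the live Service actually has it set.
if [[ ${CONTROLLER_CLUSTERIP} != "" ]]; then
  EXTRA_SETTINGS="${EXTRA_SETTINGS} --set controller.service.clusterIP=${CONTROLLER_CLUSTERIP}"
fi
if [[ ${HEALTHCHECKNODEPORT} != "" ]]; then
  EXTRA_SETTINGS="${EXTRA_SETTINGS} --set controller.service.healthCheckNodePort=${HEALTHCHECKNODEPORT}"
fi
echo "${EXTRA_SETTINGS}"
```

Only the set fields end up in `EXTRA_SETTINGS`, so the eventual `helm upgrade` never writes an empty string into an immutable field.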
values.yaml:
```yaml
controller:
  name: controller
  kind: DaemonSet
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  service:
    externalTrafficPolicy: "Local"
  metrics:
    enabled: false
```
Hope this helps anyone stumbling upon this issue.