FEATURE REQUEST
NGINX Ingress controller version: latest
Kubernetes version (use kubectl version): 1.11.7
Environment:
Kernel (e.g. uname -a): NA
What happened: I added headers using the headers: section of the nginx-ingress Helm chart. Once Helm deployed the new ConfigMap containing the headers, it never restarted the nginx-ingress controller pods and the nginx.conf configuration was not updated, because only the custom-headers ConfigMap had changed.
What you expected to happen:
It would be nice if there were a post-install hook that always restarts the nginx-ingress controller pods when a ConfigMap changes, since you expect the new headers to propagate into the configuration in nginx.conf.
https://github.com/helm/helm/blob/master/docs/charts_hooks.md#the-available-hooks
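A common Helm workaround, sketched below, would be to hash the rendered ConfigMap template into a pod annotation on the controller Deployment, so any header change rolls the pods. This is not part of the chart today; the template filename custom-headers-configmap.yaml and the annotation key are assumptions for illustration:

# Hypothetical addition to the controller Deployment template.
# The checksum changes whenever the rendered ConfigMap changes,
# which forces Kubernetes to roll the controller pods.
spec:
  template:
    metadata:
      annotations:
        checksum/custom-headers: {{ include (print $.Template.BasePath "/custom-headers-configmap.yaml") . | sha256sum }}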
How to reproduce it (as minimally and precisely as possible):
controller:
  publishService:
    enabled: true
  # ConfigMap of the nginx-ingress:
  config:
    ssl-redirect: "true"
    hsts: "true"
    add-headers: kube-system/nginx-ingress-custom-headers
  headers:
    Strict-Transport-Security: "max-age=31536000; includeSubDomains; preload"
    X-XSS-Protection: "1; mode=block"
    Referrer-Policy: "strict-origin-when-cross-origin"
    X-Forwarded-Port: "443"
    X-Forwarded-Proto: https
Now do a helm upgrade/install, passing in the above values.yaml (see the command sketch below).
Go back and add another header to the section above and redeploy with Helm. Helm does not restart the nginx-ingress controller pods, so the headers do not get reloaded into nginx.conf in each controller pod. :cry:
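For reference, the upgrade step looks something like this (the release name, chart reference, and namespace are assumptions and depend on how you installed the chart):

# Install or upgrade the release with the values file above.
helm upgrade --install nginx-ingress stable/nginx-ingress \
  --namespace kube-system \
  -f values.yaml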
Anything else we need to know:
I had to manually restart the nginx-ingress controller pods in order to get the headers reloaded into nginx.conf.
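For anyone hitting the same problem, a manual restart can be done by deleting the controller pods and letting the Deployment recreate them. The namespace and label selector here are assumptions that depend on how the chart was installed:

# Delete the controller pods; the Deployment recreates them,
# and the new pods render nginx.conf with the updated headers.
kubectl delete pod \
  --namespace kube-system \
  --selector app=nginx-ingress,component=controller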
@0verc1ocker this is not related to Helm but to how the controller handles ConfigMap changes. I will open a PR to fix this. Thanks for the report.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.