NGINX Ingress controller version: 0.32.0
Kubernetes version (use kubectl version): v1.17.4
Environment:
Cloud provider or hardware configuration: Bare-metal
Kernel (uname -a): 4.19.0
What happened:
When ~~hsts-preload is set to "true" in the ingress-nginx-controller~~ a new configuration is applied to the ingress, the ingress is no longer reachable (HTTP requests time out). A restart of the Deployment (or DaemonSet in my case, see below) fixes the problem.
What you expected to happen:
~~The Strict-Transport-Security header contains preload.~~ The configuration should be applied and the ingress should still be reachable.
How to reproduce it:
Apply the configuration from https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/baremetal/deploy.yaml. ~~Adapt the ConfigMap to include hsts-preload: "true".~~ Change the Deployment to a DaemonSet and set hostNetwork: true in the pod config (I do not know whether all of these steps are necessary to reproduce the problem). Change the ingress configuration.
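For reference, a minimal sketch of the ConfigMap change described above, assuming the ConfigMap name and namespace used in the linked baremetal deploy.yaml:

```yaml
# Sketch only: the controller picks up nginx settings from this ConfigMap
# and triggers a backend reload when it changes.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  hsts-preload: "true"
```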
Anything else we need to know:
This happens only when the setting is changed and applied on a running ingress. ~~The logs show no hint of the problem, but~~ every HTTP request fails with a request timeout. If you have further questions, feel free to ask; I am happy to test your suggestions.
/kind bug
Okay, some updates:
This is not limited to changing the hsts-preload setting but also happens in some other cases, for example when a new certificate is issued. That also triggers a backend reload of the ingress, resulting in numerous occurrences of some of the following lines:
pthread_create() failed (11: Resource temporarily unavailable)
fork() failed while spawning "worker process" (11: Resource temporarily unavailable)
sendmsg() failed (9: Bad file descriptor)
worker process ... exited with fatal code 2 and cannot be respawned
I will further try to narrow down when or why this happens.
Manually reloading nginx (kubectl exec -it -n ingress-nginx ingress-nginx-controller-... -- nginx -c /etc/nginx/nginx.conf -s reload) fixes the problem until the next reload.
pthread_create() failed (11: Resource temporarily unavailable)
fork() failed while spawning "worker process" (11: Resource temporarily unavailable)
Closing. This error means the node where the pod is running is out of resources.
If the node has more than 16 CPU cores, I suggest you tune the default worker-processes directive down to 4. The default is auto, i.e. one worker per CPU core, and during a reload the old and new worker sets briefly overlap, so a many-core node can run out of processes and threads.
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#worker-processes
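A minimal sketch of that tuning, assuming the same ConfigMap name and namespace as above:

```yaml
# Sketch only: cap nginx at 4 workers instead of the default "auto"
# (one worker per CPU core), so each reload forks far fewer processes.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  worker-processes: "4"
```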
Thank you, setting worker-processes to 4 worked like a charm!