NGINX Ingress controller version: 0.12.0 and 0.14.0
Kubernetes version (use kubectl version): v1.8.0
Environment: AWS with ELB
uname -a: 4.4.0-119-generic x86_64 GNU/Linux
What happened: Ingress controller drops WebSocket connections when performing a backend reload
What you expected to happen: WebSocket connections should remain open to the target server
How to reproduce it (as minimally and precisely as possible):
Deploy a simple WebSocket service into Kubernetes and create an Ingress rule for it (a minimal sketch of such an Ingress is shown after these steps)
Open a WebSocket client and connect (for example, https://websocket.org/echo.html)
Cause the ingress controller to perform a backend reload (for example, add or delete an unrelated ingress rule)
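For the first step, a minimal sketch of such an Ingress (the resource names, host, and timeout values are placeholders, not from the original report; the proxy-timeout annotations just keep idle WebSocket connections from being closed by the normal proxy timeouts):

```yaml
# Hypothetical Ingress for a simple WebSocket echo service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ws-echo
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Raise the proxy timeouts so idle WebSocket connections are not dropped
    # for reasons unrelated to the reload being reproduced here.
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  rules:
    - host: ws.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: ws-echo
              servicePort: 80
```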
Anything else we need to know:
I verified this with versions 0.12 and 0.14.
To my knowledge, this is how Nginx behaves on reloads. That being said, you can enable dynamic mode using --enable-dynamic-configuration to avoid reloads on backend changes.
Or use the worker-shutdown-timeout directive in your configuration ConfigMap. (A sketch of where the flag is set follows below.)
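A rough sketch of where the --enable-dynamic-configuration flag goes on the controller Deployment (the deployment/container names and the other args are assumptions about a typical install, not from this report; only the last flag is the point here):

```yaml
# Fragment of the ingress-nginx controller Deployment spec (illustrative only).
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-controller
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            # Opt-in dynamic backend reconfiguration (0.12+), which avoids
            # full nginx reloads when only endpoints change.
            - --enable-dynamic-configuration
```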
Thanks for the reply @JordanP and @ElvinEfendi,
Using --enable-dynamic-configuration makes the disconnects less frequent, but they still happen.
To our understanding, WebSockets should be persistent connections to the backend.
Are we wrong to use nginx to proxy these connections?
Can we do something like nginx -s reload to keep the connections alive?
Using --enable-dynamic-configuration makes the disconnects less frequent, but they still happen.
OK, can you tell us what you are changing? Maybe you are deploying a new version of your app?
To our understanding, WebSockets should be persistent connections to the backend. Are we wrong to use nginx to proxy these connections?
This is not an issue with nginx in particular; you will face the same issue with other load balancers. The issue here is that you need to drain the connections before replacing the old pods (once Kubernetes removes the pod from the ready endpoints, we cannot do anything about it).
Please check https://github.com/kubernetes/ingress-nginx/issues/322#issuecomment-298016539
Also, please adjust the value of worker-shutdown-timeout (see the ConfigMap sketch below).
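For reference, a sketch of the ConfigMap entry (the ConfigMap name and namespace depend on how the controller was deployed and are assumptions here; the value is illustrative):

```yaml
# Give old nginx workers more time to finish in-flight/long-lived connections
# after a reload, instead of the default grace period.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  worker-shutdown-timeout: "3600s"
```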
Can we do something like nginx -s reload to keep the connections alive?
That is what happens now.
@aledbf Actually, because the ingress controller is a cluster-wide service (lots of pods from different namespaces are reverse-proxied through it), each change event triggers a reload and therefore disconnects our WebSockets.
For example, we have our WebSocket pod running with clients connected to it;
then we deploy a new HTTP service pod with a new Ingress rule in a different namespace, which causes the nginx controller to reload and closes the WebSocket pod's connections.
So it is kind of unwanted behavior.
You may want to read http://danielfm.me/posts/painless-nginx-ingress.html and especially the "Ingress Classes To The Rescue" section.
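To make the ingress-class idea concrete, a rough sketch (the class name "websocket" and the resource names are assumptions): run a dedicated controller instance started with --ingress-class=websocket, and annotate only the WebSocket Ingress to match, so changes to unrelated Ingresses no longer reload the controller that carries the WebSocket traffic.

```yaml
# Ingress picked up only by the controller started with --ingress-class=websocket;
# other Ingresses (without this class) are handled by the default controller.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ws-echo
  annotations:
    kubernetes.io/ingress.class: "websocket"
spec:
  rules:
    - host: ws.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: ws-echo
              servicePort: 80
```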
For future reference ... setting worker-shutdown-timeout long enough (24 hours) fixed the issue for me on version https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.9.0
Just want to note that worker-shutdown-timeout might have an unexpected side effect: every time Nginx receives a reload signal, it spins up a new set of workers (the configured number) without shutting down the current ones until the timeout (here 24 hours) passes or there are no active connections anymore. If the controller reloads Nginx enough times, your pod can get OOM killed.