Currently all workers accept in parallel with no coordination, which means they sometimes try to accept the same connection, triggering EAGAIN and burning CPU for nothing.
While modern OSes have mostly fixed this, it can still happen in Gunicorn since we can listen on multiple interfaces.
The solution I see is to introduce some communication between the arbiter and the workers. The accept
will still be executed directly in the calling worker if the socket accept returns OK. Otherwise the listen socket is "selected" in the arbiter using an event loop, and an input-ready callback will run the socket accept from a worker when the event is triggered.
While this can change in the future by adding more mechanisms, like sharing memory between the arbiter and the workers, we will take the simple path for now:
Usual garbage collection will take care of closing the pipe when needed.
*Note*: possibly, the pipe will also let the workers notify the arbiter that they are alive.
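The proposal above could be sketched roughly as follows. This is a hypothetical illustration, not gunicorn code: `dispatch_once` and `try_accept` are made-up names, and real workers would run these in loops with error handling.

```python
import os
import select

def dispatch_once(listeners, worker_pipes, next_worker, timeout=None):
    """One arbiter iteration: select on the listen sockets and, for each
    readable one, write a wakeup byte to a single worker's pipe (round
    robin) instead of letting every worker wake up at once.
    Returns the index of the next worker to notify."""
    readable, _, _ = select.select(listeners, [], [], timeout)
    for _ in readable:
        os.write(worker_pipes[next_worker], b".")
        next_worker = (next_worker + 1) % len(worker_pipes)
    return next_worker

def try_accept(listener, wakeup_fd):
    """Worker side: consume the wakeup byte from the pipe, then attempt a
    non-blocking accept(). Returns the connection, or None if another
    worker won the race (the EAGAIN case the issue describes)."""
    os.read(wakeup_fd, 1)
    try:
        conn, _addr = listener.accept()
        return conn
    except BlockingIOError:
        return None
```

The pipe doubles as the communication channel mentioned in the note: the worker could write back on another pipe to signal liveness.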
Problems to solve
Each async worker currently accepts using its own method without much consideration for gunicorn. For example, the gevent worker uses the gevent Server object; tornado and eventlet use similar systems. We should find a way to adapt them to use the new socket signaling system.
Thoughts? Any other suggestions?
Glad to see that there's going to be a thunder-lock in gunicorn.
I think lock based approach is better than signaling based one.
The arbiter doesn't know which worker is busy or how many connections are coming in on the socket.
@methane not sure I follow; using IPC is about adding a lock system somehow (a semaphore or the like is just that ;) .
The arbiter will know whether a worker is busy because the worker will notify the arbiter about it (releasing the lock it put on accept).
Asking as an outsider, is this something that is feasible to do for the next minor version release or is this a giant feature?
Have there been reports about this being an issue? Seems awfully complex. Reading the link from @methane, I'd probably vote for the signaling approach as well, but as you point out that means we have to alter each worker so that they aren't selecting on the TCP socket and instead wait for the signal on the pipe. Seems reasonable I guess, just complicated.
Following is a comparison of the flows for accepting a new connection:

1. Worker wakes up and gets the lock
2. Start epoll

The lock solution involves fewer context switches.
The lock solution is also better on concurrency. Under a flood of new connections,
the arbiter may become a bottleneck, and workers can't work while many cores sit idle.
So I prefer the lock solution.
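The lock approach being argued for here could be sketched like this. A hedged illustration only: `make_listener` and `locked_accept` are invented names, and a real implementation would need a cross-platform shared lock (the complexity objected to below), not just `multiprocessing.Lock` inherited across fork.

```python
import multiprocessing
import select
import socket

# Shared "thunder lock": only the worker holding it sleeps in
# select()/accept(), so the kernel never wakes more than one worker.
accept_lock = multiprocessing.Lock()

def make_listener(host="127.0.0.1", port=0, backlog=64):
    """Create a listening TCP socket (port 0 = pick a free port)."""
    s = socket.socket()
    s.bind((host, port))
    s.listen(backlog)
    return s

def locked_accept(listeners):
    """Serialize accept across workers: hold the shared lock while
    waiting for and accepting one connection, then release it so the
    next worker can take a turn."""
    with accept_lock:
        readable, _, _ = select.select(listeners, [], [])
        return readable[0].accept()
```

In a forked-worker setup each child would call `locked_accept` in its loop; releasing the lock after accept is how the arbiter-side bookkeeping @methane mentions (who is busy) could be inferred.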
@methane The downside of the lock is that it's a single point of contention. With the signaling approach there's room for optimizations, like running multiple accepts that don't require synchronization under load. Not to mention the sheer complexity of attempting to write and support a cross-platform IPC locking scheme. Given the caveats in the article you linked to earlier, I'm not really keen on attempting such a thing.
Contemplating the uwsgi article that @methane linked to earlier, I'm still not convinced that this is even an issue we should be attempting to "fix", seeing as it's really not an issue for modern kernels. I'd vote to tell people that actually experience this that they just need to upgrade their deployment targets. Then again, I'm fairly averse to introducing complexity.
@davisp If we were simply blocking on accept() in our workers that would be one thing, but, partly because we allow multiple listening sockets, our workers generally select on them, which means the kernel will wake them all.
Oh right.
According to the uwsgi article: (Note: Apache is really smart about that, when it only needs to wait on a single file descriptor, it only calls accept() taking advantage of modern kernels' anti-thundering herd policies)
How about we fix this common case where we only have one listening socket?
+1
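The common-case fix suggested above could look something like this. A sketch under assumptions, not gunicorn's implementation: `wait_for_connection` is a hypothetical name, and it assumes the listeners are blocking sockets.

```python
import select
import socket

def wait_for_connection(listeners):
    """With exactly one listening socket, block directly in accept():
    modern kernels wake only one accept()ing process, so no userspace
    coordination is needed. Only fall back to select(), which can wake
    every worker, when there are several listeners."""
    if len(listeners) == 1:
        return listeners[0].accept()
    readable, _, _ = select.select(listeners, [], [])
    return readable[0].accept()
```

This mirrors the Apache behaviour the uwsgi article describes: the herd problem only needs solving in the multi-listener case.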
@diwu1989 Forgot to answer, but this feature will appear in 20.0 in October.
@benoitc was this fixed? You may want to update the documentation here if so - http://docs.gunicorn.org/en/stable/faq.html#does-gunicorn-suffer-from-the-thundering-herd-problem
FYI, Linux 4.5 introduced EPOLLEXCLUSIVE.
http://kernelnewbies.org/Linux_4.5#head-64f3b13b9026133a232a418a27ac029e21fff2ba
So this was added to the R20.0 milestone, then removed. Have we decided not to work on this anymore, then?
I made the 20 milestone and provisionally added things without discussion or input from others. It was aspirational.
As far as I know we don't have a consensus work plan for the milestone. We should probably discuss soon :-)
Ah, I see Benoit added this one, then removed it. I would guess similar thoughts to mine.
Python has select.EPOLLEXCLUSIVE now. If someone wants to implement that, I would gladly review the PR.
@benoitc
https://uwsgi-docs.readthedocs.io/en/latest/articles/SerializingAccept.html#how-application-server-developers-solved-it
Fast answer: they generally do not solve/care it
?
This would need to be fixed for every worker class, right? Seems like it's not worth fixing...
It wouldn't be too hard to implement on the sync worker; gthread would need to wait on https://bugs.python.org/issue35517, which appears to be dead.
No idea how this would be done with the gevent worker; maybe the arbiter would have to proxy requests to the workers??
Python 3.6 added support for epoll's EPOLLEXCLUSIVE, which will solve Thundering Herd when running on Linux 4.5+. See: https://docs.python.org/3/library/select.html#edge-and-level-trigger-polling-epoll-objects
"EPOLLEXCLUSIVE: Wake only one epoll object when the associated fd has an event. The default (if this flag is not set) is to wake all epoll objects polling on a fd.
New in version 3.6: EPOLLEXCLUSIVE was added. It’s only supported by Linux Kernel 4.5 or later."
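Using the flag quoted above, the per-worker setup might look like this. A minimal sketch, assuming Linux 4.5+ and Python 3.6+; the function names are illustrative, not gunicorn's.

```python
import select
import socket

def make_exclusive_poller(listener):
    """Register the shared listening socket with this worker's own epoll
    object using EPOLLEXCLUSIVE, so the kernel wakes only one of the
    epoll objects polling the fd per incoming connection."""
    ep = select.epoll()
    ep.register(listener.fileno(), select.EPOLLIN | select.EPOLLEXCLUSIVE)
    return ep

def accept_one(listener, ep):
    """Block until this worker is the one woken, then accept."""
    ep.poll()
    return listener.accept()
```

Each forked worker would build its own poller over the inherited listener; the exclusivity is enforced by the kernel, so no pipe or lock is needed.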