fix thundering herd
Currently all workers accept in parallel with no coordination, which means they sometimes try to accept the same connection, triggering EAGAIN and increasing CPU usage for nothing.
While modern OSes have mostly fixed that, it can still happen in Gunicorn since we can listen on multiple interfaces.
Solution
The solution I see is to introduce some communication between the arbiter and the workers. accept() will still be executed directly in the worker when the call succeeds. Otherwise the listening socket is "selected" in the arbiter using an event loop, and an input-ready callback will run accept() in a worker when the event is triggered.
Implementation details:
While this can change in the future by adding more mechanisms, like sharing memory between the arbiter and the workers, we will take the simple path for now:
- One pipe will be maintained between the arbiter and each worker. This pipe will be used for signaling.
- The arbiter will put all listener sockets in an event loop. Once a read event is triggered, it will notify one of the available workers to accept.
- For the event loop it will use the selectors module from Python 3; it will be backported for Python 2.x.
Usual garbage collection will take care of closing the pipes when needed.
*Note*: possibly, the pipe will also let the workers notify the arbiter that they are alive.
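To make the idea concrete, here is a minimal sketch of the signaling described above using the selectors module; the helper names (worker_pipes, idle_workers, listeners_by_fd) are purely illustrative and not gunicorn internals:

```python
import os
import selectors

def arbiter_loop(listeners, worker_pipes, idle_workers):
    # Arbiter side: watch every listening socket and, when one becomes
    # readable, signal exactly one idle worker over its pipe.
    sel = selectors.DefaultSelector()
    for sock in listeners:
        sel.register(sock, selectors.EVENT_READ)
    while True:
        for key, _events in sel.select():
            if not idle_workers:
                break  # all workers busy; retry on the next wakeup
            pid = idle_workers.pop()
            fd_hint = str(key.fileobj.fileno()).encode()
            os.write(worker_pipes[pid], fd_hint + b"\n")

def worker_wait_and_accept(pipe_r, listeners_by_fd):
    # Worker side: block on the signaling pipe, then accept on the
    # listener the arbiter pointed at.
    data = os.read(pipe_r, 16)
    sock = listeners_by_fd[int(data.strip())]
    return sock.accept()
```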
Problems to solve
Each async worker currently accepts connections using its own mechanism, without much consideration for gunicorn. For example the gevent worker uses the gevent Server object, and tornado and eventlet use similar systems. We should find a way to adapt them to use the new socket signaling system.
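For illustration, this is roughly how a gevent-based accept loop looks today (a standalone sketch, not gunicorn's actual worker code): StreamServer owns the listening socket and calls accept() internally, which is why plugging in an external signaling scheme is not straightforward.

```python
from gevent.server import StreamServer

def handle(client_socket, address):
    # Trivial handler just to show where the accepted connection ends up.
    client_socket.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    client_socket.close()

# StreamServer drives its own accept loop on the listening address,
# so the worker, not the arbiter, decides when accept() happens.
server = StreamServer(("127.0.0.1", 8000), handle)
server.serve_forever()
```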
Thoughts? Any other suggestions?
Glad to see that there's going to be a thunder-lock in gunicorn.
FWIW: http://uwsgi-docs.readthedocs.org/en/latest/articles/SerializingAccept.html
I think a lock-based approach is better than a signaling-based one. The arbiter doesn't know which workers are busy or how many connections are arriving on the socket.
@methane not sure I follow, using IPC is about adding a lock system somehow... (a semaphore or something like it is just that ;).
The arbiter will know whether a worker is busy because the worker will notify the arbiter about it (releasing the lock it put on accept).
Asking as an outsider, is this something that is feasible to do for the next minor version release or is this a giant feature?
Have there been reports about this being an issue? Seems awfully complex. Reading the link from @methane I'd probably vote for the signaling approach as well, but as you point out that means we have to alter each worker so that they aren't selecting on the TCP socket and instead wait for the signal on the pipe. Seems reasonable I guess, just complicated.
The following compares the two flows for accepting a new connection.
Arbiter solution
- New connection comes in
- Arbiter wakes up from epoll
- Arbiter selects a worker and sends a signal over the pipe
- Worker wakes up from epoll
- Worker tries accept()
Lock solution
-2. Worker wakes up and gets the lock
-1. Worker starts epoll
0. New connection comes in
- Worker wakes up from epoll
- Worker accepts the connection and releases the lock
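As a concrete illustration of the lock flow above, here is a minimal sketch using a multiprocessing lock shared by all workers (illustrative names, not gunicorn code):

```python
import multiprocessing
import select

# Inherited by worker processes via fork; only the holder sleeps in
# select()/accept(), so a new connection wakes exactly one process.
accept_lock = multiprocessing.Lock()

def worker_accept_loop(listeners, handle):
    while True:
        with accept_lock:
            readable, _, _ = select.select(listeners, [], [])
            conn, addr = readable[0].accept()
        # The lock is released before handling so another worker can accept.
        handle(conn, addr)
```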
My thought
The lock solution needs fewer context switches.
The lock solution is also better for concurrency. Under a flood of new connections, the arbiter may become a bottleneck and workers can't make progress while many cores sit idle.
So I prefer the lock solution.
@methane The down side of the lock is that it's a single point of contention. With the signaling approach there's room for optimizations, like running multiple accepts that don't require synchronization under load. Not to mention the sheer complexity of attempting to write and support a cross-platform IPC locking scheme. Given the caveats in the article you linked to earlier I'm not really keen on attempting such a thing.
Contemplating the uwsgi article that @methane linked to earlier, I'm still not convinced that this is even an issue we should be attempting to "fix", seeing as it's really not an issue for modern kernels. I'd vote to tell people that actually experience this that they just need to upgrade their deployment targets. Then again I'm fairly averse to introducing complexity.
@davisp if we were simply blocking on accept() in our workers that would be one thing, but, partly because we allow multiple listening sockets, our workers generally select on them, which means the kernel will wake them all.
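Roughly the pattern being described, as a simplified sketch rather than gunicorn's actual worker loop: every worker sleeps in the same select() call over the listeners (assumed non-blocking here), so the kernel can wake all of them for a single connection and the losers get EAGAIN.

```python
import errno
import select

def wait_and_accept(listeners):
    # Every worker process runs a loop like this over the same listening
    # sockets, so one incoming connection may wake all of them.
    readable, _, _ = select.select(listeners, [], [], 1.0)
    for sock in readable:
        try:
            return sock.accept()
        except OSError as exc:
            if exc.errno != errno.EAGAIN:  # another worker won the race
                raise
    return None
```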
Oh right.
FWIW: http://stackoverflow.com/questions/12494914/how-does-the-operating-system-load-balance-between-multiple-processes-accepting/12502808#12502808 https://www.citi.umich.edu/u/cel/linux-scalability/reports/accept.html
According to the uwsgi article: (Note: Apache is really smart about that, when it only needs to wait on a single file descriptor, it only calls accept() taking advantage of modern kernels anti-thundering herd policies)
How about we fix this common case where we only have one listening socket?
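In that common case the fix could be as simple as the sketch below, assuming a single blocking listener: each worker blocks directly in accept() and the kernel's wake-one behavior does the serialization.

```python
import socket

def single_listener_loop(sock, handle):
    # With exactly one listener per worker, blocking in accept() lets
    # modern kernels wake only one of the processes blocked on the socket,
    # avoiding the thundering herd without extra locking or signaling.
    sock.setblocking(True)
    while True:
        conn, addr = sock.accept()
        handle(conn, addr)
```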
+1
@diwu1989 I forgot to answer, but this feature will appear in 20.0 in October.
@benoitc was this fixed? You may want to update the documentation here if so - http://docs.gunicorn.org/en/stable/faq.html#does-gunicorn-suffer-from-the-thundering-herd-problem
FWIW, Linux 4.5 introduced EPOLLEXCLUSIVE. http://kernelnewbies.org/Linux_4.5#head-64f3b13b9026133a232a418a27ac029e21fff2ba
So this was added to the R20.0 milestone, then removed. Have we decided not to work on this anymore, then?
I made the 20 milestone and provisionally added things without discussion or input from others. It was aspirational.
As far as I know we don't have a consensus work plan for the milestone. We should probably discuss soon :-)
Ah, I see Benoit added this one, then removed it. I would guess similar thoughts to mine.
Python has select.EPOLLEXCLUSIVE now. If someone wants to implement that, I would gladly review the PR.
@benoitc https://uwsgi-docs.readthedocs.io/en/latest/articles/SerializingAccept.html#how-application-server-developers-solved-it
Fast answer: they generally do not solve/care it
?
this would need to be fixed for every worker class right? seems like it's not that worth fixing...
wouldn't be too hard to implement on sync worker; gthread would need to wait on https://bugs.python.org/issue35517, which appears to be dead
no idea how this would be done with gevent worker; maybe the arbiter would have to be proxying requests to workers??
Python 3.6 added support for epoll's EPOLLEXCLUSIVE, which will solve Thundering Herd when running on Linux 4.5+. See: https://docs.python.org/3/library/select.html#edge-and-level-trigger-polling-epoll-objects
"EPOLLEXCLUSIVE: Wake only one epoll object when the associated fd has an event. The default (if this flag is not set) is to wake all epoll objects polling on a fd.
New in version 3.6: EPOLLEXCLUSIVE was added. It’s only supported by Linux Kernel 4.5 or later."
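A minimal sketch of what such an implementation could look like on Linux 4.5+ with Python 3.6+ (illustrative only, not gunicorn's actual worker code):

```python
import select

def make_exclusive_poller(listeners):
    # Register the listening sockets with EPOLLEXCLUSIVE so an incoming
    # connection wakes only one of the processes polling the same fd.
    ep = select.epoll()
    flags = select.EPOLLIN | getattr(select, "EPOLLEXCLUSIVE", 0)
    for sock in listeners:
        ep.register(sock.fileno(), flags)
    return ep

def wait_for_connection(ep, listeners_by_fd, timeout=1.0):
    # Each worker keeps its own epoll object; EPOLLEXCLUSIVE makes the
    # kernel pick only one of them to wake per event.
    for fd, _events in ep.poll(timeout):
        return listeners_by_fd[fd].accept()
    return None
```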
Wouldn't enabling the SO_REUSEPORT option in gunicorn be a workaround, as mentioned in libuv?
After enabling it I only get syscalls like these:
epoll_wait(7, [], 64, 1000) = 0
fchmod(6, 001) = 0
getppid() = 85
getpid() = 95
epoll_wait(7, [], 64, 1000) = 0
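For reference, this is roughly what SO_REUSEPORT does at the socket level, shown as a standalone sketch outside gunicorn: each worker binds its own socket to the same address and the kernel load-balances new connections between them (Linux 3.9+).

```python
import socket

def make_reuseport_listener(host="0.0.0.0", port=8000):
    # Each worker creates and binds its own listening socket; with
    # SO_REUSEPORT the kernel distributes incoming connections across all
    # sockets bound to the same address, so only one worker wakes up.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    sock.bind((host, port))
    sock.listen(128)
    return sock
```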
@horpto I reviewed the code in 20.1.0 and found it already supports reuse_port,
but I'm curious: why are the sockets created in the arbiter rather than in the workers? @brosner
@qq413434162 yes, but the programmer has to turn it on explicitly in the config.
Yes! Why not create the sockets in the workers instead of in arbiter.run()?
To monitor them and to allow hot upgrades of gunicorn (using USR2).
Thank you for your answer! :) My question is: why still use reuseport in the arbiter? What does it do there?
As I see it: 1. reuseport makes the kernel load-balance wakeups between threads/processes listening on different sockets (i.e. not the same fd). 2. The kernel added WQ_FLAG_EXCLUSIVE in Linux 2.6 to solve the thundering herd problem; its weakness is that the woken thread/process is not load-balanced.
@benoitc Is it safe to assume that the thundering herd problem won't occur in gunicorn, at least with the gevent worker type, when running on Linux 4.5+?