
Make graceful shut-down keep-alive behavior consistent

Open tilgovi opened this issue 9 years ago • 24 comments

Following on from #922, the handling of keep-alive connections during graceful shutdown is not really specified anywhere and may not be consistent among workers.

  • [x] Describe the intended behavior
  • [x] Take stock of current behavior and make issues for each worker
  • [x] Ship it

tilgovi avatar Mar 25 '16 20:03 tilgovi

@tilgovi do you need anything from me there?

benoitc avatar Apr 07 '16 19:04 benoitc

If you want to "describe the intended behavior" that would be helpful, otherwise I'll propose it.

tilgovi avatar Apr 07 '16 19:04 tilgovi

Sorry I missed your answer.

Graceful shutdown only means we allow some time for in-flight requests to finish.

  1. When the signal is received, the workers stop accepting any new connections.
  2. When the graceful timeout expires, all still-running client connections are closed (their sockets are closed), keep-alive or not.

Speaking of keep-alive connections, I think we should stop the request loop when the signal is received instead of accepting any new requests. Thoughts?
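
A minimal sketch of those two phases, assuming a plain threaded server (the flag names, port, and timeout value are invented for illustration; this is not gunicorn's code):

import signal
import socket
import threading
import time

GRACEFUL_TIMEOUT = 30.0   # stands in for --graceful-timeout (assumed value)

accepting = True          # cleared as soon as the shutdown signal arrives
open_conns = set()        # connections accepted but not finished yet
lock = threading.Lock()


def handle_term(signum, frame):
    """Phase 1: stop accepting. Phase 2: force-close leftovers at graceful time."""
    global accepting
    accepting = False

    def reaper():
        time.sleep(GRACEFUL_TIMEOUT)
        with lock:
            for conn in open_conns:   # keep-alive or not, they get closed
                conn.close()

    threading.Thread(target=reaper, daemon=True).start()


signal.signal(signal.SIGTERM, handle_term)

listener = socket.socket()
listener.bind(("127.0.0.1", 8000))
listener.listen(128)
listener.settimeout(1.0)

while accepting:
    try:
        conn, _ = listener.accept()
    except socket.timeout:
        continue
    with lock:
        open_conns.add(conn)
    # ... hand conn off to a request-handling thread, which would discard it
    # from open_conns when the request finishes ...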

benoitc avatar Oct 16 '16 17:10 benoitc

When the graceful timeout expires, all still-running client connections are closed (their sockets are closed), keep-alive or not.

@benoitc Considering a sync worker without threads, I believe no connections are aborted/lost other than the one serving the current in-process request, because connections are queued at the master process and not at the worker process level?

If so, can you clarify what you mean by "all still-running client connections are closed"? I assume you are referring to threaded/async workers here (where multiple requests may be processed concurrently, compared to a sync worker without threads)?

tuukkamustonen avatar Apr 03 '17 09:04 tuukkamustonen

@tuukkamustonen the master doesn't queue any connections. Each worker is responsible for accepting connections itself. AFAIK connections are queued at the system level. When the master receives the HUP signal it notifies the workers and they stop accepting new connections. Then the running connections (those already accepted) have the graceful time to finish or are forcefully closed.
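
For illustration, a rough sketch of that pre-fork pattern (this is not gunicorn's code; the port, backlog size, and worker count are arbitrary):

import os
import socket

# The master creates the listening socket. Pending connections queue up in
# the kernel's listen backlog, not inside the master process.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 8000))
listener.listen(2048)  # the backlog is where "queued at the system level" happens

WORKERS = 4

for _ in range(WORKERS):
    if os.fork() == 0:
        # Worker: inherits the listening socket and accepts from it directly.
        while True:
            conn, _addr = listener.accept()
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n"
                         b"Connection: close\r\n\r\nok")
            conn.close()

# Master: never calls accept(); it only supervises the workers.
for _ in range(WORKERS):
    os.wait()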

benoitc avatar Jun 29 '18 21:06 benoitc

AFAIK connections are queued at the system level

Ah, I wonder how that works / where it's instructed... well, no need to go that deep :)

When the master receives the HUP signal it notifies the workers and they stop accepting new connections. Then the running connections (those already accepted) have the graceful time to finish or are forcefully closed.

Ok. This summarizes it nicely.

tuukkamustonen avatar Jul 03 '18 07:07 tuukkamustonen

@tilgovi we should probably close this issue?

benoitc avatar Oct 09 '18 05:10 benoitc

I would like to keep this one open. I'm not convinced we have consistent behavior here yet.

tilgovi avatar Oct 09 '18 18:10 tilgovi

How to reproduce the problem (in fact, the problem can easily be observed on a busy production system during a graceful shutdown):

  1. Run gunicorn with keep-alive (trivial app returning some data):

$ gunicorn --max-requests 512 --keep-alive 2 --threads 20 --workers 4 t:app

Run the Apache benchmark:

$ ab -n 10000 -c 20 -s 1 -k -r 127.0.0.1:8000/

...
Concurrency Level:      20
Time taken for tests:   2.693 seconds
Complete requests:      10000
Failed requests:        435

Note the > 4% failed requests, caused purely by restarted workers (in this case triggered by --max-requests).

  2. Run gunicorn without keep-alive (same app):

$ gunicorn --max-requests 512 --keep-alive 0 --threads 20 --workers 4 t:app

$ ab -n 10000 -c 20 -s 1 -k -r 127.0.0.1:8000/

...
Complete requests:      10000
Failed requests:        0

No failed requests this time.

Tried on gunicorn versions up to 20.0.0.
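
The trivial app itself wasn't shared; a hypothetical t.py along these lines would be enough to reproduce the setup above:

# t.py - a trivial WSGI app returning some data (assumed; the original was not shared)
def app(environ, start_response):
    body = b'{"status": "ok"}'
    start_response("200 OK", [
        ("Content-Type", "application/json"),
        ("Content-Length", str(len(body))),
    ])
    return [body]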

vgrebenschikov avatar Nov 15 '19 22:11 vgrebenschikov

It is probably worth resolving the problem in the following way:

During a graceful shutdown, on a keep-alive connection, try to serve one more request after the shutdown is requested and send Connection: close in the response to force the sender not to reuse this socket for the next request; if no request arrives within a reasonable timeframe (e.g. 1s), just close the connection.

Yes, there is a small possibility of a race (the server decides to close just as the client sends a request), but this would largely close the window for problems under heavy load where requests follow one another back to back.
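
A rough per-connection sketch of that proposal (not an implementation in gunicorn; handle_request is a hypothetical callback that serves one HTTP request from the socket):

import select
import socket


def drain_keepalive(conn: socket.socket, handle_request, wait=1.0):
    """Give the client one last chance to use the keep-alive socket during
    graceful shutdown, answering with Connection: close, then close it."""
    readable, _, _ = select.select([conn], [], [], wait)
    if readable:
        # A request arrived inside the window: serve it, forcing
        # Connection: close so the client stops reusing this socket.
        handle_request(conn, force_close=True)
    # Whether or not a request arrived, the connection is closed now.
    conn.close()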

vgrebenschikov avatar Nov 15 '19 22:11 vgrebenschikov

cc @tilgovi ^^

In fact, there are two schools of thought here, IMO:

  1. We consider the connection already established, so we could accept one or more requests on it during the graceful time (what you're describing). I.e. we keep the connections open as long as we receive requests on them and we are within the graceful period.
  2. If HUP or USR2 is sent to gunicorn, it means we want to switch to the new configuration or application version as fast as possible. In that case the graceful time exists so we can terminate the current requests cleanly (finishing a transaction, etc.). But we don't want any new requests handled by the old version. So it makes sense to also not accept new requests on keep-alive connections and to close them once the current request terminates.

I'm in favour of 2, which may be safer. Thoughts?
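
In loop form, option 2 amounts to something like the sketch below (serve_one_request and shutting_down are invented names, not gunicorn internals; forcing Connection: close on the final response is discussed further down the thread):

def keepalive_loop(conn, serve_one_request, shutting_down):
    """Option 2: finish the request currently in flight, then drop the
    keep-alive connection instead of waiting for further requests on it."""
    while True:
        wants_keepalive = serve_one_request(conn)
        if not wants_keepalive or shutting_down():
            # Either the client didn't ask for keep-alive, or shutdown began
            # while this request was running: stop reading from the socket.
            break
    conn.close()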

benoitc avatar Nov 23 '19 00:11 benoitc

I am very much in favor of Option 2. That is the behavior I've assumed, and I've made some changes to this end, linked from #922.

I don't know which workers implement all of these behaviors, but we should check:

  1. Close the socket. This is necessary in case Gunicorn is being shut down and not reloaded. The OS should be notified as soon as possible to stop accepting requests so that new requests can be directed to a different node in a load balanced deployment.

  2. Close connections after the next response. Do not allow connection keep-alive. For the reasons you state, it is best that any future request go to the new version of the code, or to another node.

Any other behaviors we should describe before we audit?

tilgovi avatar Nov 23 '19 22:11 tilgovi

Close the socket. This is necessary in case Gunicorn is being shut down and not reloaded. The OS should be notified as soon as possible to stop accepting requests so that new requests can be directed to a different node in a load balanced deployment.

This is #922. I think it is done for all workers and the arbiter.

Close connections after the next response. Do not allow connection keep-alive. For the reasons you state, it is best that any future request go to the new version of the code, or to another node.

This is this ticket. We should make sure all workers do this.

tilgovi avatar Nov 23 '19 22:11 tilgovi

Close the socket. This is necessary in case Gunicorn is being shut down and not reloaded. The OS should be notified as soon as possible to stop accepting requests so that new requests can be directed to a different node in a load balanced deployment.

This is #922. I think it is done for all workers and the arbiter.

I think this is still done, but we have a new issue caused by it at #1725. The same issue might exist for worker types other than eventlet.

Close connections after the next response. Do not allow connection keep-alive. For the reasons you state, it is best that any future request go to the new version of the code, or to another node.

This is this ticket. We should make sure all workers do this.

I think this is now done for the threaded worker and the async workers in #2288, https://github.com/benoitc/gunicorn/commit/ebb41da4726e311080e69d1b76d1bc2769897e78 and https://github.com/benoitc/gunicorn/commit/4ae2a05c37b332773997f90ba7542713b9bf8274.

tilgovi avatar Apr 21 '20 21:04 tilgovi

I'm going to close this issue because I think it's mostly addressed now. I don't think the tornado worker is implementing graceful shutdown, but that can be a separate ticket.

tilgovi avatar Apr 21 '20 21:04 tilgovi

I've opened #2317 for Tornado and I'll close this.

tilgovi avatar Apr 21 '20 21:04 tilgovi

cc @tilgovi ^^ ... 2. If HUP or USR2 is sent to gunicorn, it means we want to switch to the new configuration or application version as fast as possible. In that case the graceful time exists so we can terminate the current requests cleanly (finishing a transaction, etc.). But we don't want any new requests handled by the old version. So it makes sense to also not accept new requests on keep-alive connections and to close them once the current request terminates.

I'm in favour of 2, which may be safer. Thoughts?

Probably I was not clear enough... For a keep-alive connection there is no way to close the connection "safely": the client is allowed to send its next request immediately after receiving a response (without any wait), so if we just close the connection after the last request finishes and HUP is signaled, we can certainly fail the next request unexpectedly (see the experiment above, which proves that).

So the only "safe" approaches would be to either:

  1. catch a "waiting" window, when the client is not sending a next request and is just waiting on the keep-alive socket (no clue how to get that with any guarantee, only some approximation), or
  2. send at minimum one more response with Connection: close before actually closing the connection, though we may then wait a long time for that next request...

vgrebenschikov avatar Jan 19 '21 20:01 vgrebenschikov

The gevent and eventlet workers do not have any logic to close keepalive connections during graceful shutdown. Instead, they have logic to force "Connection: close" on requests that happen during graceful shutdown. So, I believe it is already the case that they will send a "Connection: close" before actually closing the connection.

There is always a possibility that a long request ends close enough to the graceful timeout deadline that the client never gets to send another request and discover "Connection: close" before the server closes the connection forcefully. I don't see any way to avoid that. Set a longer graceful timeout to handle this.
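
Conceptually, the forcing looks something like this (a simplified sketch, not the workers' actual code; connection_header is an invented helper):

def connection_header(keepalive_requested, shutting_down):
    """Pick the Connection header for a response. During graceful shutdown the
    worker overrides whatever the client asked for and announces
    Connection: close, so a well-behaved client stops reusing the socket
    before the server tears it down."""
    if keepalive_requested and not shutting_down:
        return ("Connection", "keep-alive"), True   # keep the socket open
    return ("Connection", "close"), False           # close after this response


# Example: shutdown in progress, the client asked for keep-alive.
header, keep_open = connection_header(keepalive_requested=True, shutting_down=True)
assert header == ("Connection", "close") and keep_open is False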

tilgovi avatar Jan 24 '21 03:01 tilgovi

Re-opening until I can address issues in the eventlet and threaded server.

Just to reiterate, on graceful shutdown a worker should (see the sketch at the end of this comment):

  • Continue to notify the arbiter that it is alive.
  • Stop accepting new requests.
  • Close the socket.
  • Force Connection: close for any future requests on existing keep-alive connections.
  • Shut down when there are no further connections or the timeout expires.

Right now, the eventlet worker cannot handle existing keep-alive connections because it fails on listener.getsockname() after the socket is closed. The threaded worker is not handling existing keep-alive requests and is not notifying the arbiter that it is still alive.

I'll work to get both PRs submitted this week. I apologize for not being able to do so sooner.
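
Put together, the checklist above amounts to something like the following sketch (the callables here are invented names for illustration, not gunicorn APIs):

import time


def graceful_shutdown(listener, notify_arbiter, active_connection_count,
                      graceful_timeout=30.0):
    """Sketch of the graceful-shutdown checklist for a worker."""
    # Close the listening socket: the kernel stops queueing new connections,
    # so a load balancer can start routing traffic elsewhere.
    listener.close()

    deadline = time.monotonic() + graceful_timeout
    while active_connection_count() > 0 and time.monotonic() < deadline:
        # Keep notifying the arbiter so the worker isn't killed as unresponsive.
        notify_arbiter()
        # Existing keep-alive connections receive Connection: close on their
        # next response (see the earlier sketch) and drain on their own.
        time.sleep(1.0)
    # Past this point the worker force-closes anything left and exits.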

tilgovi avatar Feb 16 '21 03:02 tilgovi

gunicorn = "20.0.4"
worker_class = "gthread"
threads = 20
workers = 1
keepalive = 70
timeout = 500

In my case, when the gunicorn master receives a SIGHUP signal (sent by consul-template to reload refreshed secrets written to a file on local disk), it creates a new worker and gracefully shuts down the old worker. However, during the transition from the old worker to the new one, HTTP connections cached between the client and the old worker (keep-alive connections) become stale, and any request the client sends over a stale socket will hang and eventually time out.

Essentially, the threaded worker is not able to handle existing keep-alive requests.

vupadhyay avatar Feb 17 '21 06:02 vupadhyay

Hi @tilgovi,
Is there any update on this issue?

Can this issue be closed?

ccnupq avatar Jun 22 '23 18:06 ccnupq

There are still issues as documented in my last comment.

tilgovi avatar Dec 29 '23 03:12 tilgovi

Hi, I have been running gunicorn 22.0 on Python 3.11 in production (workers 10, threads 300, max-requests 1000, keepalive 75, graceful-timeout 80, timeout 200), and this issue might have been present in previous versions as well. There were many "recv() failed (104: Connection reset by peer) while reading response header from upstream" errors in the nginx logs. tcpdump showed gunicorn sending RST packets to nginx at the time of the errors. Disabling keepalive and switching to the 20.1.0 release of gunicorn fixes the issue.

geevarghesest avatar Jun 05 '24 01:06 geevarghesest

No activity in a while, closing. @tilgovi feel free to reopen if you still want to work on it :)

benoitc avatar Aug 06 '24 17:08 benoitc