aiohttp
ConnectionResetError being output in normal, expected situations
Long story short
I am frequently seeing this message output from the aiohttp library.
Traceback (most recent call last):
  File "/usr/local/lib64/python3.6/site-packages/aiohttp/web_protocol.py", line 398, in start
    await resp.prepare(request)
  File "/usr/local/lib64/python3.6/site-packages/aiohttp/web_response.py", line 300, in prepare
    return await self._start(request)
  File "/usr/local/lib64/python3.6/site-packages/aiohttp/web_response.py", line 605, in _start
    return await super()._start(request)
  File "/usr/local/lib64/python3.6/site-packages/aiohttp/web_response.py", line 367, in _start
    await writer.write_headers(status_line, headers)
  File "/usr/local/lib64/python3.6/site-packages/aiohttp/http_writer.py", line 100, in write_headers
    self._write(buf)
  File "/usr/local/lib64/python3.6/site-packages/aiohttp/http_writer.py", line 57, in _write
    raise ConnectionResetError('Cannot write to closing transport')
ConnectionResetError: Cannot write to closing transport
Expected behaviour
The library should automatically recover itself internally, or throw an error where my application code can handle it with a retry.
Actual behaviour
This error comes out in my logs and causes error detection systems to trigger.
Steps to reproduce
I don't fully understand how to reproduce it, but it happens frequently in my application that uses aiohttp as a client. The library seems to recover from it.
Your environment
aiohttp (3.1.3) client on Fedora release 27, Python 3.6.4.
According to the logs, a peer (a browser or another client) closes the connection during web handler processing. Not sure what you expect: the connected socket is closed, the library cannot recover from it, and you cannot retry because the connection was initiated by the peer, not by your server side.
This is the aiohttp client library, not the server.
It is not.
aiohttp/web_response.py from logs is used by server code only.
Ok, interesting, let me take a closer look at what's going on.
Ok, my app uses both client and server, so I probably incorrectly assumed this was the client. Still, a client disconnecting is a normal condition, not really an "error" for the server, right? So I still think there should be a way to configure the server so that this error does not get logged.
hmm... i started to get the same error after upgrading to 3.4.0
We started to see this error very frequently after upgrading. confirmed in all our aiohttp services. @asvetlov
Update: looks like it's an issue when aiohttp handles keepalive connections.
We have Apache as a reverse proxy in front of the aiohttp server; after setting disablereuse in the Apache config, this error went away.
- Keepalive handling was not changed in aiohttp 3.4
- If reverse proxy is configured to close connections early -- I have no idea what to do on aiohttp side.
I upgraded aiohttp to 3.4.0 from 3.0.7. So might be changes in 3.3
@asvetlov also, i don't see where i can configure keepalive timeout for aiohttp server
You can pass keepalive_timeout=120 (int, 75 by default) to http://docs.aiohttp.org/en/stable/web_reference.html#aiohttp.web.AppRunner constructor.
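For reference, a minimal sketch of passing keepalive_timeout through AppRunner (the handler, route, and port here are placeholders, not part of the original report):

```python
import asyncio
from aiohttp import web

async def handle(request):
    return web.Response(text='ok')

async def main():
    app = web.Application()
    app.router.add_get('/', handle)
    # keepalive_timeout is accepted by AppRunner (75 seconds by default);
    # web.run_app does not expose this option
    runner = web.AppRunner(app, keepalive_timeout=120)
    await runner.setup()
    site = web.TCPSite(runner, 'localhost', 0)  # port 0: bind an ephemeral port
    await site.start()
    # ... serve until shutdown, then:
    await runner.cleanup()

asyncio.run(main())
```

This is a server bootstrap fragment; in a real app you would keep the site running instead of cleaning up immediately.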
aiohttp.web.run_app doesn't support this option.
@asvetlov Does it honor gunicorn's keepalive_timeout if serving with gunicorn?
This ConnectionResetError happens to us as well. If a client closes the connection, there is little anyone can do, but it is a bit annoying to get a stack trace. It would be best to at least have the option to turn the exception into a log message, or to suppress it altogether.
Not even a middleware catches this exception, it seems. Maybe if it makes it to the alert system it is interesting to run stats on how many times it occurs, etc.
Anyway, the stack trace never touches user code, so it is really difficult to handle. Handling it would mean everyone having to subclass aiohttp server classes?
I have the same problem as @jam182 has. This exception spams our sentry, and I can't find a good way to handle it. Middleware doesn't help. Does anyone have a clear solution to catch this exception?
No, for now we monkey patched the aiohttp.http_writer.StreamWriter._write method. We simply catch the exception in there.
import logging

from aiohttp import http_writer

logger = logging.getLogger(__name__)

def _write_no_exception(self, chunk: bytes) -> None:
    try:
        self.original_write(chunk)
    except ConnectionResetError:
        logger.debug('ConnectionResetError exception suppressed')

def patch_streamwriter():
    http_writer.StreamWriter.original_write = http_writer.StreamWriter._write
    http_writer.StreamWriter._write = _write_no_exception
    logger.warning("StreamWriter patched to suppress ConnectionResetError's")
Something along those lines.
The method must be patched before aiohttp.web gets imported; otherwise the patch should be applied to web_protocol.StreamWriter._write instead.
I agree with @alexdashkov, and have the same issue. I completely expect that this is an "error," however, it does spam my console and makes a muck of my error detection & handling code. I'd be happy to PR in a configuration flag to disable the error, print a small info, or something like that.
Python code for a possible solution:
# aiohttp/http_writer.py, lines 66-67
if self._transport is None or self._transport.is_closing():
    raise ConnectionResetError('Cannot write to closing transport')

# change to (with `import os` at the top of the file; read the variable once
# at module level to avoid repeated environment lookups)
suppress_connection_reset = os.environ.get('AIOHTTP_SUPPRESS_CONNECTION_RESET')

if not suppress_connection_reset and (self._transport is None or self._transport.is_closing()):
    raise ConnectionResetError('Cannot write to closing transport')
Or something of that sort.
I have also encountered this when talking to a server that was not sending keep-alive flags in the response headers (for POST requests); turning right around and making a second request hit the ConnectionResetError: Cannot write to closing transport error. Adding a tiny await asyncio.sleep(0.5) call prevented the error, but a simple await asyncio.sleep(0) call to force a possible task switch did not.
My suspicion is that the server is not listening to the eof_received() event on the transport. Then again, that should be an indication that the client is not going to write anything more.
Could also be a socket that is reset instead of reused.
We do not use keep-alive and we are seeing these exceptions. Yes, it is likely that the occasional client drops the connection before slurping the last byte. We need a way to report these exceptions as warnings, since we can't control what clients do.
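One stdlib-only way to keep these tracebacks out of the logs, without monkey patching the writer, is a logging filter that drops records whose attached exception is a ConnectionResetError. A sketch, under the assumption that the server-side tracebacks are emitted via the 'aiohttp.server' logger (the filter itself works on any logger):

```python
import logging

class SuppressConnectionReset(logging.Filter):
    """Drop log records whose attached exception is a ConnectionResetError."""

    def filter(self, record: logging.LogRecord) -> bool:
        if record.exc_info:
            exc_type = record.exc_info[0]
            if exc_type is not None and issubclass(exc_type, ConnectionResetError):
                return False  # suppress this record
        return True  # let everything else through

# Assumption: aiohttp's request handler logs these tracebacks on this logger
logging.getLogger('aiohttp.server').addFilter(SuppressConnectionReset())
```

Unlike the env-var patch above, this keeps the exception semantics intact and only affects what reaches the log handlers; stats on how often it fires could be added inside filter() if needed.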