hyper
Hyper tries to reuse a closed connection and throws an exception
I ran into a problem with hyper when the remote side closes the connection after a given number of requests. When that happens, hyper throws:
```
Traceback (most recent call last):
  File "/tmp/test.py", line 26, in <module>
    resp = s.get('https://server.example.com/', headers={'Host': 'server.example.com'})
  File "/usr/local/lib/python3.6/site-packages/requests/sessions.py", line 521, in get
    return self.request('GET', url, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/requests/sessions.py", line 508, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.6/site-packages/requests/sessions.py", line 618, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/hyper/contrib.py", line 80, in send
    resp = conn.get_response()
  File "/usr/local/lib/python3.6/site-packages/hyper/common/connection.py", line 129, in get_response
    return self._conn.get_response(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/hyper/http20/connection.py", line 312, in get_response
    return HTTP20Response(stream.getheaders(), stream)
  File "/usr/local/lib/python3.6/site-packages/hyper/http20/stream.py", line 230, in getheaders
    self._recv_cb(stream_id=self.stream_id)
  File "/usr/local/lib/python3.6/site-packages/hyper/http20/connection.py", line 771, in _recv_cb
    self._single_read()
  File "/usr/local/lib/python3.6/site-packages/hyper/http20/connection.py", line 669, in _single_read
    events = conn.receive_data(data)
  File "/usr/local/lib/python3.6/site-packages/h2/connection.py", line 1531, in receive_data
    events.extend(self._receive_frame(frame))
  File "/usr/local/lib/python3.6/site-packages/h2/connection.py", line 1554, in _receive_frame
    frames, events = self._frame_dispatch_table[frame.__class__](frame)
  File "/usr/local/lib/python3.6/site-packages/h2/connection.py", line 1623, in _receive_headers_frame
    ConnectionInputs.RECV_HEADERS
  File "/usr/local/lib/python3.6/site-packages/h2/connection.py", line 246, in process_input
    "Invalid input %s in state %s" % (input_, old_state)
h2.exceptions.ProtocolError: Invalid input ConnectionInputs.RECV_HEADERS in state ConnectionState.CLOSED
```
After much testing I came to the conclusion that it's hyper's fault, not the server's (nginx in this case): both Go and Firefox had no problem with nginx's behavior.
I believe the problem is that the connection dict defined at https://github.com/Lukasa/hyper/blob/669253fe136f28ebe160c9db99257937a1c52a1b/hyper/contrib.py#L33 is never cleaned of closed connections. So a new request that should open a new connection comes along, hyper tries to reuse the old closed one, and the exception above is thrown.
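As an untested workaround sketch (not an official hyper API, just a subclass assuming `connections` is the dict linked above), one could catch the `ProtocolError`, evict the cached connections, and retry once on a fresh connection:

```python
import h2.exceptions
from hyper.contrib import HTTP20Adapter


class EvictingHTTP20Adapter(HTTP20Adapter):
    """Hypothetical workaround: retry once on a fresh connection when
    the server has already closed the cached one."""

    def send(self, request, stream=False, **kwargs):
        try:
            return super().send(request, stream=stream, **kwargs)
        except h2.exceptions.ProtocolError:
            # The cached connection is dead and the dict is never cleaned,
            # so forget all cached connections and retry the request.
            self.connections.clear()
            return super().send(request, stream=stream, **kwargs)
```

This is a blunt instrument (it drops every cached connection, not just the dead one), but it illustrates where the cleanup is missing.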
I have made a repo with which the bug is easily reproducible using nginx 1.12.1 and Python 3.6.2. The stack trace above is from the test.py file in that repo.
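For reference, a minimal reproduction along the lines of that test.py might look like the sketch below (the hostname comes from the traceback; the loop count and the low nginx request limit are assumptions):

```python
# Assumes nginx behind server.example.com closes the HTTP/2 connection
# after a small number of requests (e.g. a low http2_max_requests).
import requests
from hyper.contrib import HTTP20Adapter

s = requests.Session()
s.mount('https://server.example.com', HTTP20Adapter())

for i in range(10):
    # Once nginx's per-connection request limit is hit it sends GOAWAY and
    # closes the socket; hyper still reuses the dead connection, and the
    # next request raises the ProtocolError shown above.
    resp = s.get('https://server.example.com/',
                 headers={'Host': 'server.example.com'})
    print(i, resp.status_code)
```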
Yeah, that sounds wrong. I don't have any time to rush into fixing this in the short term, so I think this would be a good thing for someone to write a test case for and then make the appropriate patch. :smile:
Probably a long shot since it's been so long, but did you ever make a patch for this yourself?
@BRK0014 I think I tried, but I no longer work for the company I was at at the time, so it will be hard to check :(. And I remember my attempts weren't very successful; if I had had anything close to working, I would've posted it :(
I think in the end I just increased nginx's maximum number of requests per connection before it sends GOAWAY, which was okay for my use case.
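That limit is nginx's `http2_max_requests` directive (available since nginx 1.11.6, default 1000), so the workaround presumably looked something like this illustrative sketch (the value and server block are assumptions):

```nginx
server {
    listen 443 ssl http2;
    server_name server.example.com;

    # Raise the per-connection HTTP/2 request limit so nginx sends
    # GOAWAY far less often.
    http2_max_requests 100000;
}
```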