hpack
RuntimeError: deque mutated during iteration
Not familiar with this package, but I ended up with this RuntimeError from an httpx get.
Backing off get_with_retry(...) for 0.1s (httpx.RemoteProtocolError: <ConnectionTerminated error_code:9, last_stream_id:15, additional_data:None>)
Backing off get_with_retry(...) for 0.4s (httpx.RemoteProtocolError: <ConnectionTerminated error_code:9, last_stream_id:15, additional_data:None>)
Traceback (most recent call last):
File "/host/usr/local/bin/watch.py", line 567, in <module>
main()
File "/host/usr/local/bin/watch.py", line 396, in main
for _ in executor.map(download, secs, repeat(c), repeat(ret_periods), repeat(display_price)):
File "/usr/lib/python3.12/concurrent/futures/_base.py", line 619, in result_iterator
yield _result_or_cancel(fs.pop())
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/concurrent/futures/_base.py", line 317, in _result_or_cancel
return fut.result(timeout)
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/host/usr/local/bin/watch.py", line 145, in download
resp = get_with_retry(url, c, timeout=5)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/backoff/_sync.py", line 105, in retry
ret = target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/backoff/_sync.py", line 48, in retry
ret = target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "/host/usr/local/bin/mb_httpx.py", line 104, in get_with_retry
return c.get(
^^^^^^
File "/usr/local/lib/python3.12/dist-packages/httpx/_client.py", line 1054, in get
return self.request(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/httpx/_client.py", line 827, in request
return self.send(request, auth=auth, follow_redirects=follow_redirects)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/httpx/_client.py", line 914, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/httpx/_client.py", line 942, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/httpx/_client.py", line 979, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/httpx/_client.py", line 1015, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/httpx/_transports/default.py", line 233, in handle_request
resp = self._pool.handle_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
raise exc from None
File "/usr/local/lib/python3.12/dist-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
response = connection.handle_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/httpcore/_sync/connection.py", line 101, in handle_request
return self._connection.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/httpcore/_sync/http2.py", line 185, in handle_request
raise exc
File "/usr/local/lib/python3.12/dist-packages/httpcore/_sync/http2.py", line 142, in handle_request
self._send_request_headers(request=request, stream_id=stream_id)
File "/usr/local/lib/python3.12/dist-packages/httpcore/_sync/http2.py", line 247, in _send_request_headers
self._h2_state.send_headers(stream_id, headers, end_stream=end_stream)
File "/usr/local/lib/python3.12/dist-packages/h2/connection.py", line 770, in send_headers
frames = stream.send_headers(
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/h2/stream.py", line 867, in send_headers
frames = self._build_headers_frames(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/h2/stream.py", line 1254, in _build_headers_frames
encoded_headers = encoder.encode(headers)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/hpack/hpack.py", line 255, in encode
header_block.append(self.add(header, sensitive, huffman))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/hpack/hpack.py", line 280, in add
match = self.header_table.search(name, value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/hpack/table.py", line 184, in search
for (i, (n, v)) in enumerate(self.dynamic_entries):
RuntimeError: deque mutated during iteration
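For context, the error at the bottom is CPython's built-in guard: a `collections.deque` iterator raises as soon as the deque's length changes underneath it. A minimal stdlib-only reproduction of the same failure mode (the names are illustrative, not hpack's internals):

```python
from collections import deque

# stand-in for hpack's dynamic header table
dynamic_entries = deque([("name-a", "1"), ("name-b", "2")])

try:
    for i, (n, v) in enumerate(dynamic_entries):
        # simulate another thread adding an entry mid-iteration,
        # which is what concurrent send_headers() calls can cause
        dynamic_entries.appendleft(("name-c", "3"))
except RuntimeError as exc:
    print(exc)  # deque mutated during iteration
```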
@mborsetti thanks for reporting this. I don't see an issue here directly, but there might be a deeply hidden issue. Did you also report this to the httpx project? It's likely that this is a downstream issue. The hpack library itself does not handle multi-threading or concurrent access - this is up to the consumer of the library, in this case httpx. It could be related to https://github.com/encode/httpx/issues/3002
@Kriechi Thanks for your reply. I am not familiar with the architecture, so I only reported it here; I will cross-report to httpx next.
Cross-posted at https://github.com/encode/httpx/discussions/3279
So apparently this is a thread-safety issue: https://github.com/ros-visualization/rqt_robot_monitor/issues/6
@Kriechi we can consider adding a lock to the search method, or making it work over a copy. Looking at the stack trace, this is a check before adding a new value, so I think a lock is more appropriate. (Or we could drop the deque and use a dict, which should be thread-safe with O(1) lookups.)
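A sketch of the lock variant (the class and method names are illustrative only, not hpack's actual API):

```python
import threading
from collections import deque

class LockedHeaderTable:
    """Illustrative wrapper: search() and add() share one lock, so the
    deque can never be mutated while another thread iterates it."""

    def __init__(self):
        self.dynamic_entries = deque()
        self._lock = threading.Lock()

    def search(self, name, value):
        # same enumerate loop as in table.py, but under the lock
        with self._lock:
            for i, (n, v) in enumerate(self.dynamic_entries):
                if n == name and v == value:
                    return i
        return None

    def add(self, name, value):
        with self._lock:
            self.dynamic_entries.appendleft((name, value))
```

The copy variant would instead iterate over a snapshot such as `tuple(self.dynamic_entries)` inside `search()`, which avoids the `RuntimeError` at the cost of one allocation per lookup.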
@BYK not sure how an issue from 2018 relates to hpack here. As stated above: the hpack library itself does not handle multi-threading or concurrent access - this is up to the consumer of the library.
@Kriechi well here's the breakdown (I think it is mostly h2's fault, btw, which you are also a maintainer of):
- `h2` uses a single `hpack.Encoder` and `hpack.Decoder` instance for an entire `H2Connection` here: https://github.com/python-hyper/h2/blob/2730c5b053b2ab674de6c4e4f7b3e9d47dae3867/src/h2/connection.py#L292-L293
- Although we have a single instance of these per connection, a connection can have multiple concurrent streams with their own headers
- When a stream tries to send headers, they are sent to the same `Encoder` instance, causing potential race conditions like this
Proposal:
- Move this issue to `h2`
- Make `h2` use per-stream `hpack.Encoder` and `hpack.Decoder` instances.
Makes sense?
Maybe I'm misreading the reported error here, but it seems to me that httpx uses a connection pool with asyncio / concurrent futures.
Citing from the h2 README - emphasis mine:
> [h2] does not provide a parsing layer, a network layer, or **any rules about concurrency**. Instead, it's a purely in-memory solution, defined in terms of data actions and HTTP/2 frames. This is one building block of a full Python HTTP implementation.
If a consumer of the h2 and hpack libraries decides to implement multi-threading or concurrency as part of their application, it is their responsibility to ensure proper locking of the h2/hpack resources. Accessing h2 Connection or Stream objects from two different threads concurrently without safeguards is not supported - as stated in the h2 README.
So the intended and correct way of using the h2 API would be, for example, to use a mutex to protect/lock the entire h2 connection and stream state before calling any API such as stream.send_headers(...). If the h2 connection and stream state is not protected in such a way, a race condition is highly likely and will result in errors such as the ones reported above.
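Concretely, the pattern being described could look like the sketch below. The guard class is an illustration of the consumer's responsibility, not an h2 API; `conn` stands in for an `h2.connection.H2Connection`:

```python
import threading

class LockedH2Connection:
    """Sketch: serialize every call into a shared h2 connection, so the
    connection/stream state (including its single HPACK encoder) is
    never touched by two threads at once."""

    def __init__(self, conn):
        self._conn = conn  # e.g. an h2.connection.H2Connection
        self._lock = threading.Lock()

    def send_headers(self, stream_id, headers, end_stream=False):
        with self._lock:
            self._conn.send_headers(stream_id, headers, end_stream=end_stream)
            # bytes to be written to the socket by the caller
            return self._conn.data_to_send()
```

Every other state-mutating call (window updates, data frames, receiving bytes) would need to go through the same lock for this to be safe.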
Regarding your proposal of using per-stream Encoder/Decoder instances: My understanding of this section in the HTTP/2 RFC is that this would not be a valid solution:
> Each endpoint has an HPACK encoder context and an HPACK decoder context that are used for encoding and decoding all field blocks on a connection.