[BUG] Peer closed connection without sending complete message body
Describe the bug
When ChatGPT takes longer than about a minute to respond to a prompt, the response gets cut off. On the browser client it replies with "error in body stream"; through the API, an exception is raised: "peer closed connection without sending complete message body (incomplete chunked read)".
To Reproduce
Steps to reproduce the behavior:
- Use the sync chatbot
- Call chatbot.get_chat_response with a prompt whose answer takes a while to generate (a story, a program, etc.)
- The call will fail with the traceback below
Expected behavior
Preferably, it would just cut the message off and return the part that was already generated, but the current behavior might be intentional. Maybe add an argument to ignore the exception?
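For illustration, a minimal sketch of the kind of guard that argument could replace, assuming nothing about revChatGPT beyond what the traceback shows (the wrapper name safe_chat_response is made up here):

```python
import httpx

def safe_chat_response(chatbot, prompt):
    # Hypothetical wrapper: swallow the protocol error instead of crashing.
    # The partial text is still lost, because get_chat_response only returns
    # after the full body has been read; recovering it requires streaming.
    try:
        return chatbot.get_chat_response(prompt, output="text")
    except httpx.RemoteProtocolError:
        return None  # signal a truncated reply to the caller
```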
Output
Traceback (most recent call last):
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpcore/_exceptions.py", line 10, in map_exceptions
yield
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpcore/_async/http11.py", line 188, in _receive_event
event = self._h11_state.next_event()
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/h11/_connection.py", line 469, in next_event
event = self._extract_next_receive_event()
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/h11/_connection.py", line 419, in _extract_next_receive_event
event = self._reader.read_eof() # type: ignore[attr-defined]
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/h11/_readers.py", line 204, in read_eof
raise RemoteProtocolError(
h11._util.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpx/_transports/default.py", line 60, in map_httpcore_exceptions
yield
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpx/_transports/default.py", line 239, in __aiter__
async for part in self._httpcore_stream:
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpcore/_async/connection_pool.py", line 346, in __aiter__
async for part in self._stream:
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpcore/_async/http11.py", line 315, in __aiter__
raise exc
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpcore/_async/http11.py", line 308, in __aiter__
async for chunk in self._connection._receive_response_body(**kwargs):
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpcore/_async/http11.py", line 177, in _receive_response_body
event = await self._receive_event(timeout=timeout)
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpcore/_async/http11.py", line 188, in _receive_event
event = self._h11_state.next_event()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/contextlib.py", line 135, in __exit__
self.gen.throw(type, value, traceback)
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc)
httpcore.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/jack/PycharmProjects/GPTweet/bot.py", line 111, in <module>
main()
File "/Users/jack/PycharmProjects/GPTweet/bot.py", line 100, in main
response = chatbot.get_chat_response("write a long story about kazuya mishima and steve jobs", output="text")
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/revChatGPT/revChatGPT.py", line 545, in get_chat_response
return asyncio.run(coroutine_object)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/revChatGPT/revChatGPT.py", line 269, in get_chat_response
return await self.__get_chat_text(data)
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/revChatGPT/revChatGPT.py", line 209, in __get_chat_text
response = await s.post(
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpx/_client.py", line 1848, in post
return await self.request(
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpx/_client.py", line 1533, in request
return await self.send(request, auth=auth, follow_redirects=follow_redirects)
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpx/_client.py", line 1634, in send
raise exc
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpx/_client.py", line 1628, in send
await response.aread()
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpx/_models.py", line 905, in aread
self._content = b"".join([part async for part in self.aiter_bytes()])
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpx/_models.py", line 905, in <listcomp>
self._content = b"".join([part async for part in self.aiter_bytes()])
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpx/_models.py", line 923, in aiter_bytes
async for raw_bytes in self.aiter_raw():
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpx/_models.py", line 981, in aiter_raw
async for raw_stream_bytes in self.stream:
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpx/_client.py", line 147, in __aiter__
async for chunk in self._stream:
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpx/_transports/default.py", line 240, in __aiter__
yield part
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/contextlib.py", line 135, in __exit__
self.gen.throw(type, value, traceback)
File "/Users/jack/PycharmProjects/GPTweet/venv/lib/python3.9/site-packages/httpx/_transports/default.py", line 77, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)
Environment (please complete the following information):
Please update your packages before reporting! pip3 install --upgrade OpenAIAuth revChatGPT
- OS: macOS
- Python version: 3.9
- ChatGPT Version: 0.0.44
- OpenAI Version: 0.0.6
Additional context
This happens on my AWS server as well. For the browser client, there is a Tampermonkey script that keeps the text even when the error occurs: greasyfork.org/en/scripts/456551-chatgpt-fix-network-error. Should I be streaming the text instead? How do I do that?
The same bug happens on https://chat.openai.com/chat. It's a server-side issue.
@OpenAI should hire me. I can fix it lol
(joke)

That script only prevents the message from being deleted on the website. Since the message isn't being deleted here, it does nothing.
When I call the function, I can't get the unfinished text because it raises an exception. Should I be streaming the text instead?
Yes. If you stream, you receive the chunks up to the point where the exception occurs.
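A minimal sketch of that pattern using httpx directly (the URL and payload are placeholders, not revChatGPT's actual API; it just shows how streamed chunks survive the disconnect):

```python
import asyncio
import httpx

async def fetch_partial(url: str, payload: dict) -> str:
    # Stream the response body and keep whatever arrived if the peer
    # closes the connection mid-stream.
    collected = []
    try:
        async with httpx.AsyncClient(timeout=None) as client:
            async with client.stream("POST", url, json=payload) as response:
                async for chunk in response.aiter_text():
                    collected.append(chunk)  # each chunk is usable as it arrives
    except httpx.RemoteProtocolError:
        # The server dropped the connection (incomplete chunked read);
        # everything received so far is still in `collected`.
        pass
    return "".join(collected)

# Usage (placeholder endpoint):
# text = asyncio.run(fetch_partial("https://example.invalid/api", {"prompt": "..."}))
```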