Remote disconnect issues
⚠️ Search for existing issues first ⚠️
- [X] I have searched the existing issues, and there is no existing issue for my problem
GPT-3 or GPT-4
- [ ] I am using Auto-GPT with GPT-3 (GPT-3.5)
Steps to reproduce 🕹
These errors appeared after entering my goals.
Current behavior 😯
```
Traceback (most recent call last):
  File "C:\Users\cwalt\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "C:\Users\cwalt\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\cwalt\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\adapters.py", line 489, in send
    resp = conn.urlopen(
  File "C:\Users\cwalt\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 787, in urlopen
    retries = retries.increment(
  File "C:\Users\cwalt\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\retry.py", line 550, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "C:\Users\cwalt\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\packages\six.py", line 769, in reraise
    raise value.with_traceback(tb)
  File "C:\Users\cwalt\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "C:\Users\cwalt\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\cwalt\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\api_requestor.py", line 516, in request_raw
    result = _thread_context.session.request(
  File "C:\Users\cwalt\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Users\cwalt\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
  File "C:\Users\cwalt\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\adapters.py", line 547, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\cwalt\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\cwalt\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Auto-GPT-0.2.1\autogpt\__main__.py", line 53, in <module>
```
Expected behavior 🤔
No response
Your prompt 📝
# Paste your prompt here
Are you getting the error continuously? Seems like the OpenAI API is aborting the connection. Try again.
Unfortunately, yes. The same errors keep occurring regardless of what goals I input.
Are you in China? Are you using a VPN? Maybe check https://github.com/openai/openai-python/issues/56. Unfortunately, the issue is not with Auto-GPT but with OpenAI and your environment/ISP.
I just started getting this error over the last hour or so as well. Everything was working "fine" before that (I was getting other errors, but just things like the JSON error). I am using the gpt3only flag while I wait for GPT-4 API access, if that helps.
@dragonmantank can you give us some more info on your internet connection/location and whether the problem still persists?
My connection is Spectrum cable internet, midwest, 400Mb/12Mb, and tends to be very stable. I do notice that it seems to happen almost exclusively during unattended sessions (say, after something like y -50) or potentially long sessions, like after 20 or so minutes of running. It seems worse after agents start getting deployed, and happens on master or stable.
This makes it almost impossible to actually finish any project now since restarting the bot starts completely over.
[Edit] Today I'm getting more "The server is currently overloaded with other requests. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists." errors, so I'm really wondering if OpenAI was having issues (which Auto-GPT just wasn't handling gracefully) and is now returning a proper RateLimitError ... though you end up in the same boat, since this run is dead and has to start all over.
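In the meantime, a retry wrapper is the kind of thing that would keep a run alive through these transient failures. A minimal sketch (`call_with_retries` is a hypothetical helper, not Auto-GPT code; in a real setup the `transient` tuple would include `requests.exceptions.ConnectionError` and `openai.error.RateLimitError`):

```python
import random
import time

def call_with_retries(fn, retries=5, base_delay=1.0, transient=(ConnectionError,)):
    """Call fn(), retrying on transient errors with exponential backoff."""
    for attempt in range(retries):
        try:
            return fn()
        except transient:
            if attempt == retries - 1:
                raise  # out of retries: let the caller handle it
            # Exponential backoff with a little jitter: ~1s, 2s, 4s, ...
            time.sleep(base_delay * 2 ** attempt + random.random() * 0.1)
```

Wrapping each OpenAI call (chat completions and embeddings) this way would ride out a dropped connection or momentary overload instead of killing the whole session.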
Do those errors occur while processing the output of specific commands like browse_website or google? If so, which?
At least google and browse_website. Sometimes it's after it has committed chunks to memory, and sometimes after it gets and parses a full response. Each time, though, the stack trace shows it's in the OpenAI API calling code. Here's the most recent stack trace I have, where it was actually rate limited; if I get the generic timeout one again I'll post it, but those looked like what OP posted:
```
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/Users/me/autogpt/__main__.py", line 53, in <module>
    main()
  File "/Users/me/autogpt/__main__.py", line 49, in main
    agent.start_interaction_loop()
  File "/Users/me/autogpt/agent/agent.py", line 170, in start_interaction_loop
    self.memory.add(memory_to_add)
  File "/Users/me/autogpt/memory/weaviate.py", line 57, in add
    vector = get_ada_embedding(data)
             ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/me/autogpt/memory/base.py", line 19, in get_ada_embedding
    return openai.Embedding.create(input=[text], model="text-embedding-ada-002")[
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_resources/embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.RateLimitError: The server is currently overloaded with other requests. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists.
sys:1: ResourceWarning: unclosed <ssl.SSLSocket fd=10, family=2, type=1, proto=0, laddr=('172.16.0.175', 49397), raddr=('40.89.244.232', 443)>
sys:1: ResourceWarning: unclosed <ssl.SSLSocket fd=11, family=2, type=1, proto=0, laddr=('172.16.0.175', 49398), raddr=('20.83.18.132', 443)>
sys:1: ResourceWarning: unclosed <ssl.SSLSocket fd=9, family=2, type=1, proto=0, laddr=('172.16.0.175', 65493), raddr=('104.18.6.192', 443)>
```
Partial fix: #214
Appreciate the help, all!
@k-boikov, I am currently in CDMX (Mexico City) and was not using a VPN, but I could if need be.
@Pwuts, thanks I'll check out #214
@lllMBPlll that PR is currently broken, but we're working on merging it.
Fixed via #1537
I am getting the same error even though I'm using the latest master branch. Any ideas on how to fix it?

I had this issue too, and I solved it by using a VPN :)