openai-python
openai.error.APIConnectionError: Error communicating with OpenAI
Hello, we are getting this issue in our production environment, but seems to be working fine locally. Do you know what the issue might be?
Traceback (most recent call last):
  File "/env/lib/python3.9/site-packages/openai/api_requestor.py", line 279, in request_raw
    result = _thread_context.session.request(
  File "/env/lib/python3.9/site-packages/requests/sessions.py", line 529, in request
    resp = self.send(prep, **send_kwargs)
  File "/env/lib/python3.9/site-packages/requests/sessions.py", line 645, in send
    r = adapter.send(request, **kwargs)
  File "/env/lib/python3.9/site-packages/requests/adapters.py", line 519, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/engines/content-filter-alpha/completions (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f08403ee970>: Failed to establish a new connection: [Errno 110] Connection timed out'))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/beni_services/usecase.py", line 33, in execute
    response_parameters = self.do_execute()
  File "/app/product_catalog/use_cases/get_dynamic_filters.py", line 150, in do_execute
    product_details_using_gpt3 = self._get_product_details_using_gpt3(product_details=product_details)
  File "/app/product_catalog/use_cases/get_dynamic_filters.py", line 276, in _get_product_details_using_gpt3
    if self._safe_to_use_openai(openai_prompt):
  File "/app/product_catalog/use_cases/get_dynamic_filters.py", line 302, in _safe_to_use_openai
    response = openai.Completion.create(
  File "/env/lib/python3.9/site-packages/openai/api_resources/completion.py", line 31, in create
    return super().create(*args, **kwargs)
  File "/env/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 90, in create
    response, _, api_key = requestor.request(
  File "/env/lib/python3.9/site-packages/openai/api_requestor.py", line 100, in request
    result = self.request_raw(
  File "/env/lib/python3.9/site-packages/openai/api_requestor.py", line 289, in request_raw
    raise error.APIConnectionError("Error communicating with OpenAI") from e
openai.error.APIConnectionError: Error communicating with OpenAI
Thank you!
Hi @celinemol, thanks for the issue! Can you paste the Python command that's generating this error?
Yes, I believe we're just using the Python code recommended in your documentation: https://beta.openai.com/docs/engines/content-filter
@staticmethod
def _safe_to_use_openai(openai_prompt):
    """
    Implement the OpenAI Content Filter to exclude all 'unsafe' (CF=2) output.
    Documentation here: https://beta.openai.com/docs/engines/content-filter

    :param openai_prompt: str, The text we want to send to OpenAI
    :return: bool, True if it is safe to use openai
    """
    response = openai.Completion.create(
        engine="content-filter-alpha",
        prompt="<|endoftext|>" + openai_prompt + "\n--\nLabel:",
        temperature=0,
        max_tokens=1,
        top_p=0,
        logprobs=10,
    )
    output_label = response["choices"][0]["text"]
    # This is the probability at which we evaluate that a "2" is likely real
    # vs. should be discarded as a false positive
    toxic_threshold = -0.355
    if output_label == "2":
        # If the model returns "2", inspect its confidence in "2" and the other output labels
        logprobs = response["choices"][0]["logprobs"]["top_logprobs"][0]
        # If the model is not sufficiently confident in "2",
        # choose the most probable of "0" or "1".
        # Guaranteed to have a confidence for "2" since this was the selected token.
        if logprobs["2"] < toxic_threshold:
            logprob_0 = logprobs.get("0", None)
            logprob_1 = logprobs.get("1", None)
            # If both "0" and "1" have probabilities, set the output label
            # to whichever is most probable
            if logprob_0 is not None and logprob_1 is not None:
                if logprob_0 >= logprob_1:
                    output_label = "0"
                else:
                    output_label = "1"
            # If only one of them is found, set the output label to that one
            elif logprob_0 is not None:
                output_label = "0"
            elif logprob_1 is not None:
                output_label = "1"
            # If neither "0" nor "1" is available, stick with "2"
            # by leaving output_label unchanged.
    # If the most probable token is none of "0", "1", or "2",
    # this should be treated as unsafe
    if output_label not in ["0", "1", "2"]:
        output_label = "2"
    return output_label != "2"
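As an aside, the label-selection logic above can be lifted into a pure function and exercised against a mocked top_logprobs dict, which makes the fallback behaviour easy to check offline without calling the API (resolve_label is an illustrative refactor, not part of the library or the docs snippet):

```python
def resolve_label(output_label, top_logprobs, toxic_threshold=-0.355):
    """Apply the content-filter fallback rules to a raw label + logprobs dict."""
    if output_label == "2" and top_logprobs["2"] < toxic_threshold:
        # Not confident enough in "2": fall back to the likelier of "0"/"1".
        logprob_0 = top_logprobs.get("0")
        logprob_1 = top_logprobs.get("1")
        if logprob_0 is not None and logprob_1 is not None:
            output_label = "0" if logprob_0 >= logprob_1 else "1"
        elif logprob_0 is not None:
            output_label = "0"
        elif logprob_1 is not None:
            output_label = "1"
        # Neither present: keep "2".
    # Any token outside {"0", "1", "2"} is treated as unsafe.
    if output_label not in ("0", "1", "2"):
        output_label = "2"
    return output_label
```

With this split out, the intermittent connection failures can be reproduced and retried around the single openai.Completion.create call, while the decision logic stays deterministic and unit-testable.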
Given that the command works locally but not in production, my first guess would be that a firewall on your end is blocking connections to api.openai.com. Can you make calls to other engines in production?
No, we're also getting timed out on endpoint /v1/engines/text-davinci-001/completions
... we didn't make any changes though to our production environment, we haven't deployed anything in the last 7 days... do you have any ideas what the issue might be?
I don't think this is an issue related to the python package. Would you mind reaching out to [email protected]? They should be able to help you track down this issue
Ok, will do. Thank you!
Ok I think I might know the issue. We have too many connections that get created. Is there any way we can create a session and reuse it instead of creating a new connection each time?
I believe this code creates a new connection request each time:
response = openai.Completion.create(
    engine="content-filter-alpha",
    prompt="<|endoftext|>" + openai_prompt + "\n--\nLabel:",
    temperature=0,
    max_tokens=1,
    top_p=0,
    logprobs=10,
)
Is there any way we can create a session and reuse a connection to make a new request each time using the OpenAI Python library or do we need to just use the requests library to make this possible?
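For what it's worth, the 0.x traceback above shows the library already going through a per-thread session (_thread_context.session), so within one thread it should not open a brand-new TCP connection per call. If you do drop down to requests yourself, the usual pattern is one long-lived Session with a bounded connection pool and retries; a sketch of that general pattern (build_session and its parameters are illustrative, not part of the openai client):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def build_session(pool_size: int = 10, retries: int = 3) -> requests.Session:
    """Build one long-lived Session that pools and reuses HTTPS connections."""
    retry = Retry(
        total=retries,
        backoff_factor=0.5,
        status_forcelist=(429, 500, 502, 503),
    )
    adapter = HTTPAdapter(
        pool_connections=pool_size,  # number of connection pools to cache
        pool_maxsize=pool_size,      # connections kept alive per pool
        max_retries=retry,
    )
    session = requests.Session()
    session.mount("https://", adapter)
    return session

# Create once at startup and reuse for every request,
# instead of reconnecting per call:
session = build_session()
```

Every request made through this one session reuses pooled keep-alive connections, which is what avoids exhausting ephemeral ports or file descriptors under load.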
Did you find a solution to this? I'm having the same problem when calling openai.Completion.create() four times in a function.
Same issue while trying to resolve the openai:openai:0.3.0 dependency in Android Studio:
Failed to resolve: com.openai:openai:0.3.0
I'm getting the error as well, and in my case too many connections can't be the reason, as I'm simply calling the function once every couple of minutes in response to a user request (and the only user is me).
In case you have your API key in a file, check you don't have a CRLF at the end. I had it, got "API Connection Error", fixed the file, and now it connects without error.
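If the key comes from a file, stripping trailing whitespace on read avoids this class of failure entirely; a minimal sketch (load_api_key is an illustrative helper, not part of the library):

```python
from pathlib import Path

def load_api_key(path: str) -> str:
    # .strip() drops a trailing CRLF/LF (and stray spaces) that would
    # otherwise end up inside the Authorization header and break the request.
    return Path(path).read_text().strip()
```

The same applies to keys read from environment variables or YAML: a stray newline is invisible when you print the key but corrupts the HTTP header it is placed into.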
My API key is in the config.yml file. The issue is that the connection failure is intermittent. It only fails once in a few times.
I'm also getting this error while making multiple requests to the chat endpoint, one for each iteration, for a total of 15+4+2 = 21 requests. The requests all use the same message template. The only thing that might be different are the values for certain placeholders.
This has been happening since this Friday (17.03.2023).
Initially, on Friday, I was able to make these requests, multiple times.
This started after I asked the model to restrict its output to a certain number of characters per request, but now it happens even without that restriction, and even with the original message that worked for the initial requests.
This happens even if I am not connected through the VPN.
Of course, this behaviour is undesirable and makes the API unusable, so I cannot build anything with it.
I'm receiving this on gpt-4, but gpt-3.5-turbo works fine. I'm still getting charged for gpt-4 though. Lots of 502s and Timeouts, but the charges keep coming. We were supposed to launch this Friday.
Is anyone working on a PR to solve this? This is a fairly serious issue, since every request calls aiohttp.ClientSession(), which the aiohttp docs specifically state must be avoided. This makes openai-python unsuitable for production-level code by default.
Ok, I found the undocumented ContextVar, openai.aiosession. Basically, to avoid this issue, you have to manage this session yourself. To work with langchain, I created wrappers for openai.ChatCompletion and openai.Embedding, which set the session context var on create/acreate calls.
However, I still think this issue should be addressed. By default, the openai client should respect the usage guidelines for aiohttp, so this module should be responsible for opening/closing an openai application client session.
@andgate Awesome work! Would you mind sharing a snippet on how to accomplish what you did?
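Not the original poster, but the pattern @andgate describes can be sketched with stand-ins (FakeClientSession here plays the role of aiohttp.ClientSession, and the ContextVar mirrors openai.aiosession; in real code you would call openai.aiosession.set(session) at startup and await session.close() at shutdown):

```python
import contextvars

# Stand-in for openai.aiosession: a ContextVar holding the shared session.
aiosession = contextvars.ContextVar("aiosession", default=None)

class FakeClientSession:
    """Stand-in for aiohttp.ClientSession; counts reuse instead of doing I/O."""
    def __init__(self):
        self.calls = 0

    def request(self):
        self.calls += 1

def api_call():
    # Mirrors what the client does on each create/acreate: reuse the session
    # from the ContextVar if one was set, otherwise build a throwaway session
    # (the costly path the aiohttp docs warn against).
    session = aiosession.get() or FakeClientSession()
    session.request()
    return session

# Set the shared session once, then make many calls against it.
shared = FakeClientSession()
aiosession.set(shared)
for _ in range(5):
    api_call()
```

All five calls above hit the single shared session instead of constructing five new ones, which is the behaviour you want from a long-running service.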
This should be fixed in v1, which uses a different http library (httpx). Please upgrade to the latest version and let us know if the problem persists.
@rattrayalex Apologies in advance. Can you provide more details on what needs to be upgraded to the latest version / v1, please?
You bet, the migration guide is here: https://github.com/openai/openai-python/discussions/742