Handling "openAI API error received"
I have a function that calls the OpenAI API multiple times. Yesterday the function occasionally crashed with the following error in my logs:
error_code=None error_message='The server had an error while processing your request. Sorry about that!' error_param=None error_type=server_error message='OpenAI API error received' stream_error=False
However, I can't reproduce this error today to find out which langchain method call is throwing it. My function uses a mix of:
load_qa_chain(**params)(args)
LLMChain(llm=llm, prompt=prompt).run(text)
LLMChain(llm=llm, prompt=prompt).predict(text_2)
Should I be wrapping each of these langchain method calls in a try/except?
try:
    LLMChain(llm=llm, prompt=prompt).run(text)
except Exception as e:
    log.error(e)
I did try it, but when the error message is printed, it appears that it was not caught by the try/except... and the API call even appears to be retried?
Thanks!
It seems to just be logging and exiting, which makes sense. If you want it to retry, you must somehow restart the call in the except clause. For instance:
while True:
    try:
        agent.prompt(input())
    except Exception as err:
        print("ERROR:", err)
In this case the while loop just makes sure the process does not quit.
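A bounded variant avoids looping forever on a persistent outage. Below is a minimal stdlib sketch (the attempt count and backoff are arbitrary choices, not anything LangChain prescribes); it logs transient failures and only lets the exception reach the caller's try/except after the retries are exhausted, which also mirrors why a single failed request can look "retried" before anything is raised:

```python
import time

def call_with_retries(fn, max_attempts=5, base_delay=1.0):
    """Retry a flaky zero-argument call with exponential backoff.

    Transient failures are logged; only after max_attempts does the
    exception propagate to the caller's own try/except.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception as err:
            print(f"attempt {attempt} failed: {err}")
            if attempt == max_attempts:
                raise  # give up: now the caller's except clause fires
            time.sleep(base_delay * 2 ** (attempt - 1))

# Usage: wrap the LangChain call in a lambda, e.g.
# result = call_with_retries(lambda: LLMChain(llm=llm, prompt=prompt).run(text))
```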
This is caused by the OpenAI server - I bet yesterday it was a bit overloaded and not working great, but it is fixed now (it sometimes goes down).
Is there a way to handle errors from OpenAI even if the retries fail? I am using predict in many places, and adding try/except at each call site is difficult and not very clean; instead, could I define some HTTP error handler centrally?
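One way to centralise this is a decorator around the functions that make predict calls. This is my own sketch, not a LangChain feature; the decorator name and the fallback-value behaviour are assumptions you would adapt:

```python
import functools

def handle_llm_errors(fallback=None):
    """Central error handling for LLM call sites (hypothetical helper,
    not a LangChain API). On failure, log and return `fallback`
    instead of repeating try/except at every predict() call."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception as err:
                print("LLM call failed:", err)
                return fallback
        return wrapper
    return decorator

# Usage sketch, with chain.predict being the existing LangChain call:
# @handle_llm_errors(fallback="")
# def summarize(chain, text):
#     return chain.predict(text=text)
```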
Using the ChatOpenAI model, if the response from the OpenAI API call looks like this:
{
"code": null,
"type": "server_error",
"param": null,
"message": "That model is currently overloaded with other requests. You can retry your request, or contact us through our help center at help.openai.com if the error persists. (Please include the request ID c58c33110e4907638de58bec34af86e5 in your message.)"
}
Then we get the following uncaught exception in langchain/chat_models/openai.py:
File "/var/app/current/commandbar/ml/langchain/Suggestions.py", line 76, in llm_generate_suggestions
response = self.llm(
File "/var/app/venv/staging-LQM1lest/lib64/python3.8/site-packages/langchain/chat_models/base.py", line 177, in __call__
generation = self.generate(
File "/var/app/venv/staging-LQM1lest/lib64/python3.8/site-packages/langchain/chat_models/base.py", line 90, in generate
raise e
File "/var/app/venv/staging-LQM1lest/lib64/python3.8/site-packages/langchain/chat_models/base.py", line 82, in generate
results = [
File "/var/app/venv/staging-LQM1lest/lib64/python3.8/site-packages/langchain/chat_models/base.py", line 83, in <listcomp>
self._generate(m, stop=stop, run_manager=run_manager)
File "/var/app/venv/staging-LQM1lest/lib64/python3.8/site-packages/langchain/chat_models/openai.py", line 294, in _generate
return self._create_chat_result(response)
File "/var/app/venv/staging-LQM1lest/lib64/python3.8/site-packages/langchain/chat_models/openai.py", line 309, in _create_chat_result
for res in response["choices"]:
Exception Type: KeyError
Exception Value: 'choices'
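A defensive check before iterating would turn this into a readable error instead of a bare KeyError. This is just a sketch operating on a plain response dict, not a patch against LangChain's actual code:

```python
def extract_choices(response):
    """Guard for the KeyError above: if the API returned an error
    payload instead of completions, raise a clear exception."""
    if "choices" not in response:
        detail = response.get("message", "unknown error")
        raise RuntimeError(
            f"OpenAI API error ({response.get('type')}): {detail}"
        )
    return response["choices"]
```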
I had a look through the langchain tests and couldn't figure out how (or if) you have any tests with a mock OpenAI client -- if you could point me in the right direction for how you handle mocking, I could probably write a test for this.
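For reference, the standard-library unittest.mock is enough to fake an OpenAI-shaped client for a test like this. The function below is a hypothetical stand-in for the method in the traceback (the real one lives in LangChain); only the mock pattern itself is the point:

```python
from unittest.mock import MagicMock

def llm_generate_suggestions(llm, prompt):
    """Hypothetical stand-in for the method in the traceback above:
    calls an injected llm client and parses its response dict."""
    response = llm(prompt)
    if "choices" not in response:
        raise RuntimeError(response.get("message", "OpenAI error"))
    return [choice["text"] for choice in response["choices"]]

# Mock an OpenAI-shaped client that returns the error payload above,
# so the error-handling path can be tested without the real API.
mock_llm = MagicMock(return_value={"type": "server_error",
                                   "message": "model overloaded"})
try:
    llm_generate_suggestions(mock_llm, "hello")
except RuntimeError as err:
    print("caught mocked server error:", err)
```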
Another error that can be relevant (it is for me) is when the OpenAI API key is invalid. This is what is returned:
error_code=invalid_api_key error_message= error_param=None error_type=invalid_request_error
When I catch the exception, the error message is <empty message>. That is not very helpful, but the error_code is. How can we get to that?
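The exception object often carries more than its str(). A small helper like the one below can pull the structured fields off it; note the attribute names (code, error_code, json_body) are what the openai-python client commonly sets, so treat them as assumptions and check against your installed version:

```python
def describe_openai_error(err):
    """Extract structured fields from an exception whose str() may be
    empty. Attribute names are assumptions about the openai-python
    error classes, accessed defensively via getattr."""
    code = getattr(err, "code", None) or getattr(err, "error_code", None)
    return {
        "message": str(err) or "<empty message>",
        "code": code,
        "body": getattr(err, "json_body", None),
    }
```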
Hi, @gameveloster! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
From what I understand, you were experiencing an "openAI API error" in a function that calls the openai API multiple times. You were unsure which langchain method call was throwing the error and asked if you should wrap each method call with a try/except block. However, when you tried it, the error was not caught by the try/except block. TheApeMachine suggested using a while loop to retry the process in the except clause.
It seems that the issue has been resolved since then. TheApeMachine mentioned that the error was caused by the overloaded openai server, which has been fixed. Additionally, jluxenberg reported an uncaught exception in langchain/chat_models/openai.py and offered to write a test for it. pors also mentioned another relevant error when the OpenAI API key is invalid and asked how to access the error code.
Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you for your understanding and contribution to the LangChain project!