
Timeout Error OpenAI

Open shreyabhadwal opened this issue 1 year ago • 31 comments

I am facing a Warning similar to the one described here #3005

WARNING:langchain.embeddings.openai:Retrying langchain.embeddings.openai.embed_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=600).

It just keeps retrying. How do I get around this?

shreyabhadwal avatar Apr 25 '23 10:04 shreyabhadwal

Same for me as well

dnrico1 avatar Apr 26 '23 18:04 dnrico1

Getting the same error with the map-reduce summarization chain. The vanilla OpenAI API works as expected.

La1c avatar Apr 27 '23 10:04 La1c

Same, following 👀

gabacode avatar Apr 27 '23 22:04 gabacode

@dnrico1 @La1c @gabacode When are y'all getting the error? For instance, I am getting it through my websocket app deployed on Azure (it's a chatbot application). Weirdly enough, I don't face it when I run the application locally.

shreyabhadwal avatar Apr 28 '23 05:04 shreyabhadwal

+1

OpenAI chat endpoint always seems to time out when using the summarization chain.

It works with the Anthropic endpoint though.

bkamapantula avatar Apr 28 '23 11:04 bkamapantula

+1

@shreyabhadwal Experiencing the exact same behaviour. Local works well but it times out on Azure.

Binb1 avatar May 02 '23 11:05 Binb1

@Binb1 do the timeouts happen every time for you or occasionally? Also, are you using websockets or SSE?

shreyabhadwal avatar May 03 '23 13:05 shreyabhadwal

@shreyabhadwal Strangely enough, every time I deploy a new version of my app it seems to work well, but after a few minutes I get timeouts and I can't really understand why so far. I'm using SSE. I've tested a lot of different options and I have the same problem whether I make the call with the OpenAI Python SDK directly or through LangChain.

Binb1 avatar May 03 '23 13:05 Binb1

@Binb1 I experience the exact same behavior. It works well if I restart the app, and then after a few minutes when I try again I get timeouts. Very weird.

Interestingly, I have tried doing it without streaming and it seems to be working well. I don't quite understand it.
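
For reference, this is roughly the comparison I mean, as a minimal sketch assuming the langchain 0.0.x API (the callback handler is just an example):

from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# with streaming: tokens are pushed through a callback handler as they arrive
chat_streaming = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()])

# without streaming: the full completion comes back in a single response
chat_blocking = ChatOpenAI(streaming=False)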

shreyabhadwal avatar May 03 '23 14:05 shreyabhadwal

@shreyabhadwal This makes me think that it is more Azure than langchain/openai related then 😕

I have not tried streaming yet as I don't really need it but it fails for me even without it. So strange.

It feels like the webapp needs a "warmup" before being able to make the calls.

Binb1 avatar May 03 '23 14:05 Binb1

Increasing the timeout fixes it for me! Thanks @timothyasp!

gabacode avatar May 03 '23 15:05 gabacode

+1 I set the timeout to 300s, but after 3 to 5 requests it still fails with a timeout...

firezym avatar May 05 '23 18:05 firezym

OpenAI requests can run as long as 600s, and if you're sending large-token prompts to gpt-4, 300s might be too low. So I'd set it to 600s and hope for the best. But I have noticed latencies on OpenAI's end being a lot higher over the last week or two.
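
A minimal sketch of what I mean, assuming the langchain 0.0.x constructors (the model name is just an example):

from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings

# request_timeout (seconds) is passed through to the underlying openai client
llm = ChatOpenAI(model_name="gpt-4", request_timeout=600)
embeddings = OpenAIEmbeddings(request_timeout=600)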

-Tim


timothyasp avatar May 05 '23 19:05 timothyasp

@shreyabhadwal @Binb1 any luck with Azure?

Same issue: local is fine and fast, but on Azure there are problems. Something seems to fall asleep after 4-10 minutes of idling. For me, "Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry" gets logged before the actual chat call. After it times out it returns and is good again until it sits idle for another 4-10 minutes, so increasing the timeout just increases how long I wait before it times out and retries.

Driving me nuts and suspect there is a simple configuration I'm missing.

ColinTitahi avatar May 10 '23 00:05 ColinTitahi

Nope, nothing yet. @Binb1 @ColinTitahi, are y'all using async calls to OpenAI?

shreyabhadwal avatar May 10 '23 06:05 shreyabhadwal

@shreyabhadwal Not explicitly, so I don't think so. I'm using generate on the ChatOpenAI so I can get the llm_output token usage etc., plus a run call to a chat-conversational-react-description agent with some additional tools. These endpoints in my Flask app are called from the client JavaScript, which uses async to wait for the response. It's like something gets set up when the Flask app initially starts, then falls asleep or disconnects after say 4-5 minutes, and then has to wait for the timeout to occur to reconnect when the user calls it. Hence upping the timeout just increases that initial wait.

I'm using the OpenAI Chat model and hosting on an Azure web service.
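
For context, this is roughly how the ChatOpenAI part is wired up, as a minimal sketch assuming the langchain 0.0.x API (the prompt and timeout value are just placeholders):

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(request_timeout=60)

# generate() returns an LLMResult; llm_output carries the token usage
result = chat.generate([[HumanMessage(content="Say hello")]])
print(result.llm_output["token_usage"])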

ColinTitahi avatar May 10 '23 18:05 ColinTitahi

I am getting the same error with model gpt-4-0314, max_tokens = 2048 and request_timeout = 240, both locally and on a live server. Yesterday this was working fine.

sagardspeed2 avatar May 18 '23 05:05 sagardspeed2

Same issue here. Running it in a Kubernetes Pod deployed to an AWS cluster and using async calls. Works perfectly locally but times out as soon as it's in the cluster.

Weirdly, calling the OpenAI LLM directly works, but when running the Agent it gets stuck.

This works:

agent_executor = get_agent(user_token)
driver = agent_executor.agent.llm_chain.llm
cl = driver.client()
print(cl.create(model=driver.model_name, prompt='Tell me a poem'))

But this does not:

await agent_executor.arun(query)

DennisSchwartz avatar May 22 '23 18:05 DennisSchwartz

Ok so from the comments above I realised I was testing async in one case and blocking in the other.

print(await cl.acreate(model=driver.model_name, prompt='Tell me a poem'))

Does indeed also time out and fail to run! So there definitely seems to be an issue with the async calls to OpenAI. I'm going to try Anthropic for now. :)


UPDATE

I still can't make it run, neither for OpenAI nor Anthropic - but I think I know what's going on.

Our Kubernetes cluster running the application is blocking access to the internet using Squid Proxy. The OpenAI API is allowed, but only for HTTP requests. I think the OpenAI client is probably using web sockets to stream the responses and this is blocked by our proxy/firewall. Have resorted to using the sync application for now until we can figure out how to fix our proxy.

DennisSchwartz avatar May 22 '23 19:05 DennisSchwartz

I have the same issue. I am trying to hit the completion API on the text-davinci-003 engine. I am unable to replicate the issue locally, as it always works there. When I containerize it and deploy it in AWS Lambda, I sometimes (I don't know when) get the following error: Request timed out: HTTPSConnectionPool(host='instanceid.openai.azure.com', port=443): Max retries exceeded with url: //openai/deployments/textdavinci003/completions?api-version=2022-12-01 (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at XXXX>, 'Connection to instanceid.openai.azure.com timed out. (connect timeout=5)')

Any resolution?

jpsmartbots avatar Jun 04 '23 12:06 jpsmartbots

It could be a problem with the SSL certificate. Set it up as a system environment variable: os.environ["REQUESTS_CA_BUNDLE"] = "PATH_TO_YOUR_CERTIFICATE/YOUR_CERTIFICATE.crt"

maxmarkov avatar Jun 13 '23 08:06 maxmarkov

Same issue here. Works for a bit and then starts timing out. I just can't nail down when it happens and why; there doesn't seem to be a rhyme or reason. It seems to happen a lot more in production (GCP) than locally, although it happens in both. It also seems to happen with short sentences more than long ones, although not exclusively. It happens a LOT though, like 1 out of 4 requests.

bigrig2212 avatar Jun 14 '23 19:06 bigrig2212

+1

flake9 avatar Aug 08 '23 16:08 flake9

I have the same issue. Works well locally but faces timeout issues when the app is deployed to Azure App Service for Linux Python or Custom Container.

HaochenQ avatar Aug 30 '23 05:08 HaochenQ

Hi HaochenQ

Maybe deploying your solution on a virtual machine will solve your problem. When I moved from AWS Lambda to EC2, the problem got resolved.

jpsmartbots avatar Aug 30 '23 07:08 jpsmartbots

Hi HaochenQ

Maybe deploying your solution on a virtual machine will solve your problem. When I moved from AWS Lambda to EC2, the problem got resolved.

Thank you @jpsmartbots, I tried to deploy my container with an Azure VM, but the issue persists.

For those of you who are facing 504 gateway timeout issues (Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=600).) with Azure App Services, the issue is that the default HTTP timeout of Azure App Service is 230/240 seconds while the default timeout of the OpenAI APIs is 600 seconds. Before langchain hears back from OpenAI and does a retry, Azure returns an error and our app appears down. You can use request_timeout - OpenAIEmbeddings(request_timeout=30) - to avoid the timeout from the Azure side, and somehow the retry call to OpenAI from langchain then always works.
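
A minimal sketch of that workaround, assuming the langchain 0.0.x API (30 seconds is just the value that worked for me; anything comfortably under Azure's ~230-second limit should behave the same way):

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI

# keep the per-request timeout well under Azure App Service's ~230 s limit,
# so langchain's own retry fires before Azure drops the connection
embeddings = OpenAIEmbeddings(request_timeout=30)
chat = ChatOpenAI(request_timeout=30)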

Not sure why the LangChain call to OpenAI fails and causes a timeout after a period of inactivity.

HaochenQ avatar Aug 31 '23 00:08 HaochenQ

Hey all, I believe this fix in the openai-python client should also help with this issue, and with generations:

https://github.com/openai/openai-python/pull/387

The async and sync request_timeouts are NOT identical.

ShantanuNair avatar Sep 20 '23 08:09 ShantanuNair

same problem

luoqingming110 avatar Nov 22 '23 02:11 luoqingming110

I'm running into the same issue. I am running a proxy container that talks to the OpenAI API; it works locally, but not when I deploy it to Railway.

ryoung562 avatar Nov 22 '23 05:11 ryoung562

Did anyone fix this? I'm running into the same issue when I use the summarize map-reduce chain from LangChain on AWS Lambda.

mallapraveen avatar Jan 25 '24 15:01 mallapraveen