
Frequent request timed out error

Open KalakondaKrish opened this issue 2 years ago • 9 comments

I am getting this error whenever the time is greater than 60 seconds. I tried giving timeout=120 seconds in ChatOpenAI().

Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60).

What is the reason for this issue and how can I rectify it?

KalakondaKrish avatar Apr 17 '23 07:04 KalakondaKrish

+1, I'm seeing a lot of these as well, with ChatOpenAI and retrievers connected.

homanp avatar Apr 17 '23 11:04 homanp

I couldn't reproduce the error right now, but if the request_timeout param is not being applied when set, this issue is a bug.

amk9978 avatar Apr 17 '23 14:04 amk9978

Seems to work again, so probably OpenAI API issues?

homanp avatar Apr 17 '23 16:04 homanp

+1, I'm consistently encountering the same error today.

joybro avatar Apr 17 '23 17:04 joybro

+1, seeing the same issue when using langchain only. Direct calls to the OpenAI API work fine.

mkhanplative avatar Apr 17 '23 19:04 mkhanplative

gpt-4 is always timing out for me (gpt-3.5-turbo works fine). Increasing the request_timeout helps:

llm = ChatOpenAI(temperature=0, model_name=model, request_timeout=120)

rafaelquintanilha avatar Apr 17 '23 20:04 rafaelquintanilha

I have set min_seconds up to 20 in openai.py:

    import logging
    from typing import Any, Callable

    import openai
    from tenacity import (
        before_sleep_log,
        retry,
        retry_if_exception_type,
        stop_after_attempt,
        wait_exponential,
    )

    logger = logging.getLogger(__name__)

    def _create_retry_decorator(self) -> Callable[[Any], Any]:
        min_seconds = 20
        max_seconds = 60
        # Wait 2^x seconds between retries, clamped to
        # [min_seconds, max_seconds]
        return retry(
            reraise=True,
            stop=stop_after_attempt(self.max_retries),
            wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
            retry=(
                retry_if_exception_type(openai.error.Timeout)
                | retry_if_exception_type(openai.error.APIError)
                | retry_if_exception_type(openai.error.APIConnectionError)
                | retry_if_exception_type(openai.error.RateLimitError)
                | retry_if_exception_type(openai.error.ServiceUnavailableError)
            ),
            before_sleep=before_sleep_log(logger, logging.WARNING),
        )

But I still get a rate limit error:

Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.._completion_with_retry in 20.0 seconds as it raised RateLimitError: Rate limit reached for default-gpt-3.5-turbo in organization org-oTVXM6oG3frz1CFRijB3heo9 on requests per min. Limit: 3 / min. Please try again in 20s. Contact [email protected] if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method

Is the limit really only 3 requests per minute for a normal user?

dtthanh1971 avatar Apr 18 '23 08:04 dtthanh1971
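That rate limit message asks for a 20-second wait, which lines up with a 3 RPM cap (60 / 3 = 20 seconds between requests). As a sketch, here is a generic retry helper (hypothetical, not part of langchain) that spaces attempts to respect a requests-per-minute limit:

```python
import time


def call_with_rpm_limit(fn, rpm=3, max_retries=5, exc_types=(Exception,)):
    """Call fn(), sleeping 60/rpm seconds between failed attempts so a
    requests-per-minute cap is never exceeded (20s for a 3 RPM limit)."""
    wait = 60.0 / rpm
    for attempt in range(max_retries):
        try:
            return fn()
        except exc_types:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the last error
            time.sleep(wait)
```

You would wrap the actual chain or completion call in `fn`; adding a payment method to the OpenAI account is what actually raises the cap.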

gpt-4 is always timing out for me (gpt-3.5-turbo works fine). Increasing the request_timeout helps:

llm = ChatOpenAI(temperature=0, model_name=model, request_timeout=120)

Increasing the timeout helped. Thanks for the tip, @rafaelquintanilha !

mkhanplative avatar Apr 18 '23 09:04 mkhanplative

    USER_NAME = "Agent 007"  # The name you want to use when interviewing the agent.
    LLM = ChatOpenAI(max_tokens=1500, request_timeout=120)  # Can be any LLM you want.

But it did not work in my case.

dtthanh1971 avatar Apr 18 '23 09:04 dtthanh1971

+1, frequent timeouts with gpt-4. I increased the request_timeout, but it didn't help much. A direct OpenAI call works as expected. Any workaround or potential root cause?

Usage: Refine summarization chain

neethanwu avatar Apr 19 '23 01:04 neethanwu

Increasing the request_timeout value helped. Thanks.

KalakondaKrish avatar Apr 19 '23 02:04 KalakondaKrish

Not sure if this should be marked as completed. It's probably still a "bug" since it happens more often than not when using gpt-4. Maybe the request timeout should be set to 120 if model_name is "gpt-4" by default.

achempak-polymer avatar Apr 26 '23 03:04 achempak-polymer
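One way to sketch that suggestion (a hypothetical helper, not langchain's actual behavior) is to derive the default request_timeout from the model name:

```python
# Hypothetical per-model defaults: slower models get a longer timeout.
DEFAULT_TIMEOUTS = {"gpt-4": 120, "gpt-3.5-turbo": 60}


def timeout_for(model_name: str, default: int = 60) -> int:
    """Return a request_timeout suited to the model; prefix matching
    lets dated variants like "gpt-4-0314" share their family's value."""
    for prefix, seconds in DEFAULT_TIMEOUTS.items():
        if model_name.startswith(prefix):
            return seconds
    return default
```

Usage would look like `ChatOpenAI(model_name=model, request_timeout=timeout_for(model))`.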

Had this appear for some complex prompts today. Changed timeout to 120. It helped!

votkon avatar Apr 27 '23 13:04 votkon

This is driving me completely batty, so I'm hoping for any advice. I'm running a Flask app on Azure; I can't replicate the issue locally, but this is preventing me from rolling it out.

Increasing the timeout just increases how long it takes until this error is raised. It appears to happen BEFORE I call chat.generate or an agent, even before I define the base llm.

I know this may be more of an Azure thing, but any advice?

ColinTitahi avatar May 09 '23 21:05 ColinTitahi

Today I am getting the same error every time with model gpt-4-0314. I also set the request_timeout to 240, and even after that I still get the same error every time. My max_tokens limit is 2048.

Yesterday it was working well, but today it gives me the same error every time, which is driving me crazy.

sagardspeed2 avatar May 18 '23 05:05 sagardspeed2

I have the same problem with gpt-4. My script worked well, but since yesterday it times out all the time :).

Suprimepl avatar May 18 '23 07:05 Suprimepl

@Suprimepl which model are you using in your script?

sagardspeed2 avatar May 18 '23 11:05 sagardspeed2

model="gpt-4",

Suprimepl avatar May 18 '23 12:05 Suprimepl

The same problem is happening to me with model="gpt-3.5-turbo" and request_timeout=120.

santialferez avatar May 18 '23 14:05 santialferez

Much the same problem for me since midnight using gpt-4-0314. It worked well before I went to sleep, but most requests time out today.

Django-Jiang avatar May 22 '23 18:05 Django-Jiang

Getting this same error. The code seems to be fine, but the problem is dramatically worse when executing within AWS.

rcro19 avatar May 24 '23 00:05 rcro19

I think it's OpenAI's fault :/

Suprimepl avatar May 24 '23 09:05 Suprimepl

Still driving me batty. I'm looking at server config: gunicorn on Azure, gthread/gevent worker and thread counts, timeouts, Azure timeouts, etc. It could just be the size of the VM, but I shouldn't need a production-level server for testing with 5 users. It's getting to the point where I think I might just have to rewrite in Node.js.
Does anyone have the magic configuration for gunicorn that works as well as the development Flask server?

ColinTitahi avatar May 29 '23 03:05 ColinTitahi

Still driving me batty. I'm looking at server config: gunicorn on Azure, gthread/gevent worker and thread counts, timeouts, Azure timeouts, etc. It could just be the size of the VM, but I shouldn't need a production-level server for testing with 5 users. It's getting to the point where I think I might just have to rewrite in Node.js. Does anyone have the magic configuration for gunicorn that works as well as the development Flask server?

It's common to have to increase the gunicorn timeout when running on prod, their default timeout is too short.

However, from a design perspective, calling LangChain can take an unpredictable amount of time, so a safer solution in this case would be to implement some sort of queue system (for example using Celery). That way the processing happens in the background and you won't have timeout issues with gunicorn.

That said, you can try to increase gunicorn timeout by doing something like gunicorn --timeout 300 [rest of commands]
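A minimal sketch of the queue idea using only the standard library (a real deployment would use Celery or RQ as suggested above): the request handler enqueues the slow LLM call and returns a job id immediately, so the gunicorn worker never blocks on the OpenAI request.

```python
import queue
import threading
import uuid

jobs = {}            # job_id -> (status, result)
tasks = queue.Queue()  # pending (job_id, callable) pairs


def worker():
    """Background worker: run queued callables and record their results."""
    while True:
        job_id, fn = tasks.get()
        try:
            jobs[job_id] = ("done", fn())
        except Exception as e:
            jobs[job_id] = ("error", str(e))
        tasks.task_done()


threading.Thread(target=worker, daemon=True).start()


def submit(fn):
    """Enqueue fn (e.g. a lambda wrapping chain.run) and return a job id
    the HTTP response can hand back for the client to poll."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = ("pending", None)
    tasks.put((job_id, fn))
    return job_id
```

The handler returns `submit(...)`'s id right away, and a second endpoint looks the status up in `jobs`; Celery gives you the same shape with persistence and multiple workers.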

rafaelquintanilha avatar May 29 '23 13:05 rafaelquintanilha

Thanks @rafaelquintanilha, my timeout was already at 600. I will investigate Celery. My attempts at using gevent were unsuccessful (crashing the container, due to my own ignorance).

ColinTitahi avatar May 30 '23 01:05 ColinTitahi

Any updates on this issue?

zeke-john avatar Jun 17 '23 04:06 zeke-john

I still have this issue. Does anyone know a workaround?

SinaArdehali avatar Jun 18 '23 14:06 SinaArdehali

Hi, for me the problem went away when I set request_timeout=600 (or more than 600; I think that's the default value in recent versions of langchain). I think this problem is mainly a request-time issue.

santialferez avatar Jun 19 '23 07:06 santialferez

To ensure that retries keep being made until the timeout is reached, I think it would be better to set max_retries=12 as the default setting, and if you change the max_seconds or multiplier settings, adjust max_retries so that retries are performed within the timeout period.

masa8 avatar Jun 19 '23 11:06 masa8
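To see the interaction, here is a rough reconstruction of the clamped exponential schedule (assuming tenacity-style wait_exponential semantics: multiplier times 2**n per retry, clamped to [min, max]):

```python
def backoff_waits(multiplier=1.0, min_s=4.0, max_s=10.0, retries=6):
    """Approximate wait_exponential: multiplier * 2**n seconds for retry
    n, clamped to [min_s, max_s]."""
    return [min(max(multiplier * 2 ** n, min_s), max_s)
            for n in range(1, retries + 1)]
```

With min 4s, max 10s and 6 retries this yields [4, 4, 8, 10, 10, 10], i.e. at most 46 seconds of sleeping in total, so a large request_timeout only buys more retries if max_retries grows with it.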

I still have this issue. Does anyone know how to solve it?