openai-python

Connection failed.

Open — 7733012 opened this issue 1 year ago

Describe the bug

Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/completions (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000002CE434DAC50>, 'Connection to api.openai.com timed out. (connect timeout=600)'))

To Reproduce

Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/completions (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000002CE434DAC50>, 'Connection to api.openai.com timed out. (connect timeout=600)'))

Code snippets

No response

OS

win

Python version

3.11

Library version

latest

7733012 avatar Mar 08 '23 07:03 7733012

I'm not sure how you're using it, but downgrading to urllib3==1.25.11 may solve this; or, if you're using a proxy together with async requests, upgrading aiohttp may help.

ref: https://zhuanlan.zhihu.com/p/611080662?utm_id=0 https://github.com/nonebot/nonebot2/issues/1784#issuecomment-1462842717
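
If it's a network/proxy problem, you can first check whether api.openai.com is even reachable from your environment. A rough sketch (the proxy address below is just a placeholder; drop it if you connect directly):

import requests

# Placeholder proxy address - replace with your own, or remove the
# `proxies` argument entirely if you connect directly.
proxies = {"https": "http://127.0.0.1:7890"}

try:
    # A short connect timeout makes network problems show up quickly.
    r = requests.get("https://api.openai.com/v1/models", proxies=proxies, timeout=10)
    print("Reached api.openai.com, status:", r.status_code)
except requests.exceptions.ConnectTimeout:
    print("Connect timeout - api.openai.com is not reachable from this network/proxy.")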

TheLZY avatar Mar 09 '23 21:03 TheLZY

Did we get any resolution on this issue? I'm also facing the same issue intermittently in our integration when using Completion/ChatCompletion.

work-peeagar avatar Mar 27 '23 06:03 work-peeagar

I'm unable to reproduce this issue. Is there a set of steps to reproduce, or any other information someone can provide to help diagnose it? (how often it's occurring, etc.)

hallacy avatar Mar 30 '23 04:03 hallacy

I have the same problem: Connect to api.openai.com:443 [api.openai.com/108.160.167.167] failed: Connection timed out: connect. I'm using Java and Apache HttpClient to test.

Echoidf avatar Apr 02 '23 06:04 Echoidf

I also have this problem: Max retries exceeded with url: /v1/chat/completions. I've seen it on both Python 3.8 and Python 3.9.

zxjhellow2 avatar Apr 03 '23 07:04 zxjhellow2

Has anyone figured out a fix for this? It started happening for me after I upgraded Anaconda.

swamichandra avatar Apr 08 '23 02:04 swamichandra

Without a clear set of replication steps, I'm not able to dig into this issue. I'm going to close the issue, but I'm happy to reopen it if replication steps turn up.

hallacy avatar Apr 08 '23 15:04 hallacy

I believe the steps are: make a normal connection to a slow PC and you will get the same result. The problem is with the timeout. Is there any way to set the timeout limit on an autogen/openai connection? In my case the call is the following:


import autogen

# config_list is shown further down in this thread; it points at a local
# OpenAI-compatible server.
llm_config = {"config_list": config_list, "temperature": 0, "seed": 42}

# Perform Completion
question = 'Who are you?'
response = autogen.oai.Completion.create(config_list=config_list, prompt=question, temperature=0, max_tokens=1000)
ans = autogen.oai.Completion.extract_text(response)[0]

print("Model response:", ans)

From what I understand, the timeout comes from somewhere inside the API call. Is there a way to set the timeout value? Since the PC hosting the LLM is slow, it needs more than 60 seconds to generate the response. Or whatever the limit is: there is a 60 in the message, but I don't think it's seconds, because the error comes well before that, roughly 30 seconds after the script makes the call it stops with that message...

Btw, that forum in Chinese is less than useful for someone asking questions in English.

kjonh2 avatar Dec 03 '23 02:12 kjonh2

Don't want to spam, but I found this out by reading the request definitions, specifically this:

result = self.request_raw(
    method.lower(),
    url,
    params=params,
    supplied_headers=headers,
    files=files,
    stream=stream,
    request_id=request_id,
    request_timeout=request_timeout,
)

(around line 289 of api_requestor.py)

That shows you can set an attribute called request_timeout.
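
For reference, with the pre-v1 openai package that same parameter can also be passed straight into a call. A rough sketch (the model and prompt are just placeholders):

import openai

# request_timeout (in seconds) appears to be forwarded down to the
# request_raw call shown above.
response = openai.Completion.create(
    model="text-davinci-003",   # placeholder model name
    prompt="Who are you?",
    max_tokens=100,
    request_timeout=120,        # give up if no response within 2 minutes
)
print(response["choices"][0]["text"])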

So, in my code from the last message there is something missing that I didn't show: the config_list:

config_list = [
    {
        'model': 'TheBloke/Mistral-7b-Instruct-V0.1-GGUF',
        'api_key': 'mykey',
        'api_type': 'openai',
        'api_base': 'http://xxx.xxx.xxx.xxx:1234/v1',
        'api_version': 'whatever'
    }
]

If I add the attribute request_timeout, its value is the timeout you will get: if you set 10, you get a 10-second timeout, meaning that if the server hasn't generated the response within 10 seconds the call is ended. So I added 10000, which is more than enough time for the generation to finish. The config_list now looks something like this:

config_list = [
    {
        'model': 'TheBloke/Mistral-7b-Instruct-V0.1-GGUF',
        'api_key': 'mykey',
        'api_type': 'openai',
        'api_base': 'http://xxx.xxx.xxx.xxx:1234/v1',
        'api_version': 'whatever',
        'request_timeout': 10000
    }
]

This is the solution for those of you who keep getting request timeouts because your PC is too slow or something similar.

If you are connecting to a fast PC that you are sure can produce the response in time but you still get timeouts, check whether the request you are making is well formed. In my case I knew it was the server speed, because I could see it generating the tokens in the server log while the connection was being closed before it finished, so all I really needed was a way to extend the stupid timeout value.

I'll say it again: if your problem is the same as mine, go ahead and add that attribute; it just does what I said and extends the time allowed for the connection to the server. But if you are connecting to an online server and you have no access to its logs to see whether it is generating anything, I recommend not doing this, especially with something like OpenAI, because it could be generating something in a loop (or whatever) and still spending tokens, so you will get charged for that. An online server should also complete the request much faster than 30 seconds or whatever the default timeout is. I'm not responsible for your bill at the end of the month, use it at your own risk :p

Hope I helped someone ;)

PS: I'm sure the request_timeout attribute controls the 60 you see in the timeout message, because I tried the code with other values, like 6, and it closes the connection after 6 seconds, so I'm positive it is meant to be used for that ;)

kjonh2 avatar Dec 03 '23 04:12 kjonh2

FYI, the new v1 of this library has a 10 minute timeout by default, and easier-to-customize configuration as well.

Of course, not everyone is able to migrate right away, so thank you for sharing your learnings!
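
For anyone already on v1 who wants a different value, something like this works. A minimal sketch (the model and timeout values are just placeholders):

from openai import OpenAI

# The default client timeout is 10 minutes; you can override it per client...
client = OpenAI(timeout=120.0)  # seconds

# ...or per request:
response = client.with_options(timeout=30.0).chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Who are you?"}],
)
print(response.choices[0].message.content)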

rattrayalex avatar Dec 03 '23 05:12 rattrayalex

What do you mean by the v1 of this library? I have the latest version, if I'm not mistaken. How do I use v1? Doesn't pip install openai automatically install the latest version? Or is it just about the implementation you need to change, like the code you write to make the calls? I'm sorry if I sound like a real noob; I don't know the technical names for things. I code as a hobby to make my own things, I'm not a real programmer. I just try things and make them work by looking for solutions or digging through the original library code to understand how it works. I don't know how to explain it very well, and I don't understand the technical details, just that if I call this function it returns me this, and I can parse or do whatever with the result :) Then I build my own logic, which most of the time is badly written, and when I look at my older code I put my hands on my head wondering how I ever typed that. It was so simple and I made it so complicated.

So with that in mind, I just read the new v1 forum link, and from what I understand the code above is still OK, right? I'm just starting with development on the openai library, and hopefully I'm not starting with the wrong code; I want to start with the most recent methods. Btw, the call I have above connects through the openai library somehow, but it's actually autogen that deals with it, so in my case I don't think I can do much in this situation, since all I'm doing is passing the llm config params to the autogen method that makes the call. Do you recommend not using autogen if it's not kept up to date? I was just messing around to see if I could get multi-agents working, since I can't use the real OpenAI API because of financial problems right now; it's not the right time for me to spend money on hobbies, which is why I'm running it locally on a cheap laptop that manages to run it at a really slow speed :) I know this has nothing to do with the topic here, but I'm just trying to help people understand where I'm coming from, and why I said do it at your own risk if you are doing what I do :p Real devs know better how to do stuff and keep up to date with the new code, I hope, like Microsoft and autogen :)

kjonh2 avatar Dec 03 '23 07:12 kjonh2