openai-python
                                
InvalidRequestError: Invalid URL when using ChatCompletion
Describe the bug
When I upgraded openai to v0.27.0 and ran your sample:

import openai

openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"},
    ],
)
I got:
Traceback (most recent call last):
  File "C:\Users\Administrator\AppData\Local\Temp\ipykernel_4560\2473871474.py", line 1, in <module>
  File "D:\Anaconda\lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "D:\Anaconda\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "D:\Anaconda\lib\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "D:\Anaconda\lib\site-packages\openai\api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "D:\Anaconda\lib\site-packages\openai\api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
InvalidRequestError: Invalid URL (POST /v1/chat/completions)
To Reproduce
import openai
import openai

openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"},
    ],
)
Code snippets
No response
OS
Windows
Python version
3.9.13
Library version
0.27.0
I can confirm that the issue was reproduced in openai-0.26.5 and openai-0.27.0 with the error message "Error communicating with OpenAI: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?"
Well, I think I've found the reason. I was using an API key generated earlier, which worked fine with openai.Completion.create and the text-davinci-003 model. After I regenerated a new API key and replaced the old one, the problem was solved.
The issue is that the new gpt-3.5-turbo endpoint requires you to use the ChatCompletion class instead of Completion.
response = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[
        {"role": "user", "content": "Who won the world series in 2020?"},
    ],
    max_tokens=193,
    temperature=0,
)
You can find more information here. The current error message is quite poor, though, and explaining this to the user would be a big improvement. I'm open to submitting a pull request.
Well, that would be nice of you. By the way, do you know how to make the conversation last more than one round? I can't find the answer at https://help.openai.com/en/articles/7039783-chatgpt-api-faq. Might it be related to the 'stream' parameter?
How can I make a continuing conversation, i.e. have ChatGPT respond based on the previous exchanges?
Thanks a million
You have to include an array containing the past conversation in the request, because the model has no memory of past conversations:

messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    {"role": "user", "content": "Where was it played?"},
]
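Since the model is stateless, a common pattern is to keep a running messages list and append both the user's turn and the assistant's reply each round, resending the whole list every call. A minimal sketch; the ask helper and the injected create_fn are illustrative names, not part of the library (in real code you would pass openai.ChatCompletion.create as create_fn):

```python
# Minimal multi-turn chat loop: the API has no memory, so we resend
# the full message history on every request.

def ask(history, user_text, create_fn):
    """Append a user turn, call the chat API via create_fn, and
    record the assistant's reply back into the history."""
    history.append({"role": "user", "content": user_text})
    reply = create_fn(model="gpt-3.5-turbo", messages=history)
    answer = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    return answer
```

Each call then sees the full conversation so far, which is what makes follow-up questions like "Where was it played?" resolvable.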
I'm facing the same issue, and I have updated my API key, but it didn't help. Are there any solutions to this?
You have to include an array containing the past conversation in the request, because the model has no memory of past conversations:

messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    {"role": "user", "content": "Where was it played?"},
]
I see. So that means we run into the 4,000-token limit and now have to figure out the exact token length of the messages.
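Counting tokens exactly needs a real tokenizer (e.g. the tiktoken package), but the trimming logic itself can be written against any counting function. A sketch under that assumption; rough_count is a crude whitespace word count standing in for a tokenizer, purely for illustration:

```python
def trim_history(messages, max_tokens, count_tokens):
    """Drop the oldest non-system messages until the history fits
    within max_tokens. Keeps a leading system prompt if present."""
    def total(msgs):
        return sum(count_tokens(m["content"]) for m in msgs)

    kept = list(messages)
    # Preserve a leading system message; trim from the oldest turn after it.
    start = 1 if kept and kept[0]["role"] == "system" else 0
    while total(kept) > max_tokens and len(kept) > start + 1:
        kept.pop(start)
    return kept

# Crude stand-in for a real tokenizer such as tiktoken:
rough_count = lambda text: len(text.split())
```

With a real tokenizer you would swap rough_count for something like counting the encoded tokens of each message, plus the per-message overhead the chat format adds.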
I can confirm that I am getting an invalid URL error when using the 0.27.0 version of the Python library. In my code I am calling await openai.ChatCompletion.acreate(.......) when making the attempt, and model is set to gpt-3.5-turbo.
I also generated a new API key to see if that had any effect, it did not.
These are the specific error messages:
2023-03-02 03:46:53,773 INFO message='OpenAI API response' path=https://api.openai.com/v1/engines/gpt-3.5-turbo/chat/completions processing_ms=None request_id=a7eb3ee14b3d512492d8ef7b002b3c74 response_code=404
2023-03-02 03:46:53,773 INFO error_code=None error_message='Invalid URL (POST /v1/engines/gpt-3.5-turbo/chat/completions)' error_param=None error_type=invalid_request_error message='OpenAI API error received' stream_error=False
This test is from MacOS with Python 3.9.6, library version 0.27.0.
> So that means we are facing the 4,000-token limit and have to figure out the accurate token length of messages now.
This may be off-topic for this issue, but I think you are looking for something like: https://github.com/daveshap/LongtermChatExternalSources
Check the parameters: use 'model' instead of 'engine':

openai.ChatCompletion.create(model="gpt-3.5-turbo", ....)
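The 404 log above shows why this matters: passing engine= makes the library build an engine-scoped path, while model= uses the flat chat path. A rough illustration of the resulting request paths; this is a simplification for explanation, not the library's actual code:

```python
def chat_path(model=None, engine=None):
    """Approximate the request path openai-python builds for a
    ChatCompletion call (simplified, for illustration only)."""
    if engine is not None:
        # Engine-style routing: yields the 404 "Invalid URL" seen above.
        return f"/v1/engines/{engine}/chat/completions"
    # Model goes in the request body, not the URL.
    return "/v1/chat/completions"
```

So with engine="gpt-3.5-turbo" the request hits /v1/engines/gpt-3.5-turbo/chat/completions, which does not exist, while model="gpt-3.5-turbo" posts to /v1/chat/completions as intended.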
Thank you @SparkleBo! That resolved the issue for me. I had found the engine_api_resource.py stuff and was getting close, but you solved it for me first!
I ran the code below after simply installing openai library version 0.27.0 (pip install openai==0.27.0):

import openai
from pprint import pprint

openai.api_key = "**************************************"

output = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "python library"},
        {"role": "user", "content": "python pulp library"},
        {"role": "assistant", "content": "great have a good time."},
        {"role": "user", "content": "tell me anything pulp library"},
        {"role": "user", "content": "write code to solve linear programe problem of three constraints "},
    ],
)

pprint(output['choices'][0]['message']['content'])
print(output)

Note: you have to copy the API key from your OpenAI account and set it as an environment variable so you don't get an environment error, then paste the same key into openai.api_key above and run it.
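Rather than pasting the key into source, it is safer to read it from the environment. A minimal sketch; the load_api_key helper is an illustrative name, not part of the library:

```python
import os

def load_api_key(var="OPENAI_API_KEY"):
    """Read the API key from an environment variable instead of
    hard-coding it in source. Raises if the variable is unset."""
    key = os.getenv(var)
    if not key:
        raise RuntimeError(f"Set the {var} environment variable before running")
    return key

# Then hand the key to the library:
# openai.api_key = load_api_key()
```

This keeps the secret out of your code and out of version control.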
EDIT: after updating the API key, refreshing the environment variables, closing all terminals and VS Code, and killing and restarting explorer.exe, I reopened the project and can now interact successfully. Thanks for the tips!
self.handle_error_response( openai.error.InvalidRequestError: Invalid URL (POST /v1/engines/gpt-3.5-turbo/chat/completions
Folks, a few things:
- The code above in the original issue works fine
- If you are having issues running it, and you are on v0.27.0, you need to use a virtual env
- You do NOT need to re-generate an API key to make a request
- Please read more here: https://platform.openai.com/docs/guides/chat/introduction
If you continue to have issues, please open a new issue : )