URL problem in AsyncCompletions
Confirm this is an issue with the Python library and not an underlying OpenAI API
- [x] This is an issue with the Python library
Describe the bug
In src/openai/resources/chat/completions/completions.py, line 1928, in AsyncCompletions.create
(https://github.com/openai/openai-python/blob/d6bb8c14e66605ad2b7ed7bd62951014cd21b576/src/openai/resources/chat/completions/completions.py#L1928)
"/chat/completions" is passed as a path, but it is effectively treated as a URL, so the client tries to hit a /chat/completions URL without any base URL prepended to it.
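For context, whether a path is joined under the base URL or replaces it depends on standard URL-join semantics. A minimal stdlib sketch illustrating this (the library itself builds requests via httpx, so this is only an approximation of the behavior, not the actual implementation; the base URL here is hypothetical):

```python
from urllib.parse import urljoin

base = "https://example.com/openai/v1/"  # hypothetical base_url

# A path with a leading slash replaces the base URL's path entirely:
print(urljoin(base, "/chat/completions"))
# -> https://example.com/chat/completions

# A relative path is appended under the base URL's path:
print(urljoin(base, "chat/completions"))
# -> https://example.com/openai/v1/chat/completions
```

If the client resolved "/chat/completions" this way against a base URL with a non-empty path, the base URL's path segment would be dropped, which would match the 404 described here.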
To Reproduce
Error: openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
Code snippets
```python
from llama_index.llms.openai import OpenAI

llm = OpenAI(
    temperature=temp,
    max_tokens=tokens,
    model="model",
    api_key="key",
    api_base="baseUrl",
    default_headers={headers},
)
resp = await llm.acomplete("Test")  # acomplete is a coroutine and must be awaited
```
Or, with the openai client directly:

```python
from openai import OpenAI

client = OpenAI(
    api_key="key",
    base_url="url",
    default_headers={headers},
)
response = client.chat.completions.create(
    model="model",
    n=1,
    messages=[messages],
)
```
OS
WSL
Python version
v3.12
Library version
1.65.3
Hi, I’d like to work on this issue. I’ll start investigating and update here if I have any questions.
Thanks for the report, but I cannot reproduce this. It sounds like LlamaIndex is patching the client incorrectly?