Accessing openai.ChatCompletion, no longer supported in openai>=1.0.0 - how to update the code to use the new API? (AZURE)
I'm using Azure API keys.
The error in backend logs indicates that the [openai.ChatCompletion] API has been removed in [openai>=1.0.0].
You need to update your code to use the new API.
You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run openai migrate to automatically upgrade your codebase to use the 1.0.0 interface.
Alternatively, you can pin your installation to the old version, e.g. pip install openai==0.28
A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
We don't plan to officially support Azure in this repo at the moment, but we'll leave this issue open for others to chime in.
In my experience it's hard to make it work on Azure. Could someone suggest how to make it work locally with Ollama? Is there a working Ollama PR available?
See https://github.com/abi/screenshot-to-code/issues/354#issuecomment-2435479853
Does that mean no one else has made this work recently on Azure with the new OpenAI API?
@Marcelfoxion - I have just linked Azure on a fork this morning; if you are still looking, I can push it and share the link.
@ashsmith88 Yes please, that would be awesome. Thank you!
Hey @ashsmith88, is it this Azure PR here: https://github.com/ashsmith88/screenshot-to-code/tree/main?
If you have a sec, please let me know if this is what you were about to share, as I haven't heard from you since.
Cheers
Apologies @Marcelfoxion - been really busy and forgot to reply!
I have just pushed my (rather hacky) changes here: https://github.com/ashsmith88/screenshot-to-code/tree/quick-branch
The branch works with Azure, but it also has additional changes: I have it make a second request to OpenAI to get the code in React, which I use as a template in my development. So you are probably only interested in some of the backend changes.
From the Azure point of view, I added an env arg `USE_AZURE=True` and then put my Azure keys into the two args:

```
OPENAI_API_KEY=
OPENAI_BASE_URL=
```

and in `config.py`:

```python
USE_AZURE = os.environ.get("USE_AZURE", False)
```
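One caveat worth flagging with that config line: `os.environ.get` returns a string, so setting `USE_AZURE=False` in your `.env` still yields a truthy value. A stricter parse might look like this (a sketch; `env_flag` is a made-up helper name):

```python
import os


def env_flag(name: str, default: bool = False) -> bool:
    # Environment variables are always strings, so compare against known
    # truthy spellings rather than relying on Python truthiness
    # (the string "False" is truthy).
    val = os.environ.get(name)
    if val is None:
        return default
    return val.strip().lower() in ("1", "true", "yes")


USE_AZURE = env_flag("USE_AZURE")
```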
Then I updated this function to look like this:
```python
import time
from typing import Awaitable, Callable, List

from openai import AsyncAzureOpenAI, AsyncOpenAI
from openai.types.chat import ChatCompletionMessageParam

from config import USE_AZURE

# Llm and Completion are the project's own types.


async def stream_openai_response(
    messages: List[ChatCompletionMessageParam],
    api_key: str,
    base_url: str | None,
    callback: Callable[[str], Awaitable[None]],
    model: Llm,
) -> Completion:
    start_time = time.time()
    if USE_AZURE:  # <--- Imported from config
        client = AsyncAzureOpenAI(
            api_key=api_key,
            azure_endpoint=base_url,
            api_version="2024-08-01-preview",
        )
        params = {
            "model": "gpt-4o",  # hard-coded Azure deployment name
            "messages": messages,
            "timeout": 600,
        }
    else:
        client = AsyncOpenAI(api_key=api_key, base_url=base_url)
        params = {
            "model": model.value,
            "messages": messages,
            "timeout": 600,
        }
```
You will obviously need to make sure you have an Azure deployment and update anything above accordingly (e.g. I have hard-coded my model, as I wasn't worried about it).
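If you want to avoid the hard-coded model, one option is to read the deployment name from the environment (a sketch; `AZURE_DEPLOYMENT_NAME` is a made-up variable name, and note that Azure routes requests by deployment name rather than by model id):

```python
import os

# Fall back to "gpt-4o" only as an illustrative default; in practice this
# should match the deployment you created in the Azure portal.
AZURE_DEPLOYMENT = os.environ.get("AZURE_DEPLOYMENT_NAME", "gpt-4o")

params = {
    "model": AZURE_DEPLOYMENT,  # Azure deployment name, not an OpenAI model id
    "messages": [],  # filled in by the caller
    "timeout": 600,
}
```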
This isn't a production-ready fix at all, but it works perfectly for me when running locally.