Azure OpenAI support
Summary
Support for OpenAI endpoints running on Azure.
Description
Desired settings (note: `azure_openai`, `deployment_id`, and `api_version` are not currently valid settings keys):
"assistant": {
"version": "1",
"provider": {
"type": "azure_openai",
"api_url": "https://name.openai.azure.com",
"deployment_id": "id",
"api_version": "2023-05-15"
}
}
Documentation here:
- https://learn.microsoft.com/en-us/azure/ai-services/openai/reference
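For context, Azure's endpoints put the deployment name in the URL path and require an `api-version` query parameter, which is why those settings keys are needed. A rough sketch of how the desired settings would map onto a request, using the official `openai` Python package (the resource name, deployment id, and key below are placeholders):

```python
# Sketch: how the desired settings map onto an Azure OpenAI request.
# Placeholder values throughout; nothing here is a real resource or key.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://name.openai.azure.com",  # maps to "api_url"
    api_version="2023-05-15",                        # maps to "api_version"
    api_key="YOUR_AZURE_API_KEY",
)

# For Azure, "model" is the *deployment* name, i.e. "deployment_id" above.
response = client.chat.completions.create(
    model="id",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```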
Trying to set up Azure OpenAI by using my API key for the OpenAI key input and adding this config to my settings:
"assistant": {
"version": "1",
"provider": {
"type": "azure_openai",
"api_url": "https://{my-resource}}.openai.azure.com",
"deployment_id": "gpt-4o",
"api_version": "2024-02-15-preview"
}
}
It's just unclear how to fully set this up...
Use a proxy that makes the Azure API look more like the OpenAI API, such as LiteLLM. Configure Zed like so:
"language_models": {
"openai": {
"version": "1",
// LightLLM
"api_url": "http://127.0.0.1:11435/v1"
}
}
If you disabled API key checking in the proxy, just enter any placeholder key into Zed, or it will refuse to send queries.
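To sanity-check that the proxy is reachable before pointing Zed at it, something like this should list the models it exposes (the port matches the config above; the key is a throwaway):

```python
# Quick check against the local proxy; any non-empty key works
# if key checking is disabled on the proxy side.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:11435/v1", api_key="dummy")
models = client.models.list()
print([m.id for m in models.data])
```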
Looks like it is possible to consume the Azure API directly. This is what I have:
"language_models": {
"openai": {
"version": "1",
"api_url": "https://name.openai.azure.com/openai/deployments/gpt-4o-2024-05-13/",
"available_models": [
{
"name": "gpt-4o-2024-05-13",
"max_tokens": 8000
}
]
}
},
"assistant": {
"default_model": {
"provider": "openai",
"model": "gpt-4o-2024-05-13"
},
"version": "2"
}
My deployment is called `gpt-4o-2024-05-13` (it matches the model name), but I think the key is to set `api_url` to the entire API path, including the deployment (`https://<INSTANCE>.openai.azure.com/openai/deployments/<YOUR_DEPLOYMENT>`). The obvious limitation is that we can't switch models when using the assistant.
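The same trick can be reproduced outside Zed by pointing the plain `OpenAI` client at the deployment URL; note that a `default_query` is needed to supply `api-version`, which is exactly the part Zed's OpenAI provider has no way to express (the URL, key, and version here are placeholders):

```python
# Sketch: plain OpenAI client aimed directly at an Azure deployment URL.
# Azure still wants the api-version query parameter and an api-key header.
from openai import OpenAI

client = OpenAI(
    base_url="https://name.openai.azure.com/openai/deployments/gpt-4o-2024-05-13/",
    api_key="unused",  # the client requires a key; Azure authenticates via api-key
    default_headers={"api-key": "YOUR_AZURE_API_KEY"},
    default_query={"api-version": "2024-02-15-preview"},
)

response = client.chat.completions.create(
    model="gpt-4o-2024-05-13",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```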
@jkfran does this configuration work for you? It seems that the "api_version" is not set.
Good point! It works for me because I don't interact directly with Azure; I have a proxy service in between. The proxy doesn't do much, but it ensures a default "api_version" is added to Azure requests if it's missing. I didn't realise this.
Would it be possible to add a query string parameter to these requests in this config? If not, then the best workaround is probably running LiteLLM like @phromo suggested.
This would be very interesting to have without a proxy - the API is mostly the same; it's just a matter of tacking various bits onto the URL.
my proxying solution: https://github.com/zed-industries/zed/issues/4321#issuecomment-2759774507
+1
Did anyone try to use the newly released OpenAI API Compatible feature with Azure OpenAI?
I tried without success, so I opened this discussion: https://github.com/zed-industries/zed/discussions/35515
@fareshan It doesn't work, since you still cannot set the api-version, which is mandatory for calling Azure.
Won't work for me either. The enterprise LLM instance needs a custom header set.
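Until `api-version` and custom headers can be set natively, a small local forwarder can cover both cases. A minimal sketch (the resource name, deployment, version, key, and extra header are all placeholders, and streaming responses are buffered rather than relayed incrementally):

```python
# Minimal sketch of a local forwarder: accepts OpenAI-style requests,
# tacks on the mandatory api-version query parameter plus any custom
# headers, and relays them to Azure. Placeholder values throughout;
# no error handling, and streamed responses are buffered, not streamed.
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

AZURE_BASE = "https://name.openai.azure.com/openai/deployments/gpt-4o"
API_VERSION = "2024-02-15-preview"
EXTRA_HEADERS = {
    "api-key": "YOUR_AZURE_API_KEY",
    "X-Custom-Header": "value",  # whatever your enterprise instance requires
}

class Forwarder(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # e.g. /v1/chat/completions -> <AZURE_BASE>/chat/completions?api-version=...
        path = self.path.removeprefix("/v1")
        url = f"{AZURE_BASE}{path}?api-version={API_VERSION}"
        req = urllib.request.Request(url, data=body, method="POST")
        req.add_header("Content-Type", "application/json")
        for name, value in EXTRA_HEADERS.items():
            req.add_header(name, value)
        with urllib.request.urlopen(req) as resp:
            data = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 11435), Forwarder).serve_forever()
```

With this running, the `api_url` in the `language_models` config above can point at `http://127.0.0.1:11435/v1`.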