Azure support
Someone in our Discord says Azure endpoints don't work through OAI-Compat, but routing through LiteLLM does work with the exact same endpoints. Need to look into this; it might just require adding Azure as its own endpoint type to make sure everything is wired up correctly.
Azure endpoints are not configurable, so the support is effectively broken.
XXXXXX.services.ai.azure.com/api/models/chat/completions??api-version=2024-12-01-preview/chat/completions failed, reason: getaddrinfo ENOTFOUND XXXXXX.services.ai.azure.com
I'm getting this error as well and it makes the Azure OpenAI option unusable.
Sorry for the trouble, guys! I just updated Void; can you try again with the latest version (1.99.30031)? I think this change might fix it.
@andrewpareles I'm still getting the following error:
Full Error:
{
  "cause": {
    "message": "request to https://<my-resource-name>.services.ai.azure.com/api/models/chat/completions?api-version=2025-05-01/chat/completions failed, reason: getaddrinfo ENOTFOUND <my-resource-name>.services.ai.azure.com",
    "type": "system",
    "errno": "ENOTFOUND",
    "code": "ENOTFOUND"
  }
}
It looks like the URL is still getting /chat/completions appended incorrectly.
I note this documentation from the openai node library: https://github.com/openai/openai-node?tab=readme-ov-file#microsoft-azure-openai
Their documentation uses DefaultAzureCredential, but I believe you can use an AzureKeyCredential instead if you want to authenticate with the API key.
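For reference, a minimal sketch of that with openai-node's AzureOpenAI client and a plain API key (the resource name, deployment name, and api-version below are placeholders, not values confirmed in this thread):

```ts
import { AzureOpenAI } from "openai";

// Sketch only: endpoint, deployment, and api-version are placeholder values.
const client = new AzureOpenAI({
  endpoint: "https://<my-resource-name>.openai.azure.com",
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  apiVersion: "2024-10-21",
  deployment: "<my-deployment-name>",
});

const completion = await client.chat.completions.create({
  // Azure routes by deployment; the SDK still requires a model field.
  model: "<my-deployment-name>",
  messages: [{ role: "user", content: "ping" }],
});
console.log(completion.choices[0]?.message.content);
```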
We were trying to do things uniformly with new OpenAI(), but we should definitely just use that syntax. Thanks for the reference. If anyone wants to beat me to implementing this, it's a good first issue!
Initial progress at #577; can anyone check it out and test (or review the change)?
Initial progress at #577, can anyone checkout and test (or review the change)?
I just updated Void and checked; now there is a different error from before:
Error: Must provide one of the `baseURL` or `endpoint` arguments, or the `AZURE_OPENAI_ENDPOINT` environment variable
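If it helps with debugging, that looks like the error the AzureOpenAI constructor throws when it receives no endpoint; a hypothetical repro, with a placeholder endpoint value:

```ts
import { AzureOpenAI } from "openai";

// With no `endpoint`/`baseURL` option and AZURE_OPENAI_ENDPOINT unset,
// the constructor throws the error quoted above.
try {
  new AzureOpenAI({ apiKey: "...", apiVersion: "2024-10-21" });
} catch (err) {
  console.error(err); // Must provide one of the `baseURL` or `endpoint` ...
}

// Supplying the resource endpoint explicitly avoids it (placeholder value):
const client = new AzureOpenAI({
  endpoint: "https://<my-resource-name>.openai.azure.com",
  apiKey: "...",
  apiVersion: "2024-10-21",
});
```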
Thanks a lot for checking, will fix soon...
+1 -- all examples from Azure seem to have endpoint, deployment, apiVersion, and apiKey in the options. It works when I hardcode those 4 values into the AzureOpenAI constructor.
Requiring the deployment (the model) here would be an unfortunate break from the standardization across providers. When you leave it out, you get a 404 Resource Not Found, even if you pass the deployment name as the model in completions.create({ ... }).
I also use LibreChat, a really great open-source project. In its configuration I use the same Azure inputs as in Void: instanceName, version, and apiKey, and it works perfectly.
Maybe there is an elegant solution for Azure in their code (I don't know JS).
https://github.com/danny-avila/LibreChat/blob/main/api/utils/azureUtils.js
If I'm reading this correctly, they replace the deployment with the model you're querying at the time of the LLM call.
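If that reading is right, the same pattern should be possible with openai-node, assuming I understand the client correctly: leave deployment unset at construction, and each request's model is used as the deployment in the URL path. A sketch with placeholder values:

```ts
import { AzureOpenAI } from "openai";

// LibreChat-style: one client, no fixed deployment. Each request's `model`
// is (as I understand openai-node) used as the deployment path segment,
// i.e. /openai/deployments/<model>/chat/completions.
const client = new AzureOpenAI({
  endpoint: "https://<my-resource-name>.openai.azure.com",
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  apiVersion: "2024-10-21",
});

await client.chat.completions.create({
  model: "gpt-4.1", // deployment name substituted at call time
  messages: [{ role: "user", content: "ping" }],
});
```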
@yuvalsmart - can you give my PR #595 a test and see if it works for you?
@zpg6 Sorry, I don't know how to test it without updating Void, and there is no new update available.
Will just push this soon and ask people if it works!
It works well now! I tested it with Azure o4-mini & gpt-4.1.