Support Azure AI Model Inference API provider
Feature Description
Today I deployed the Llama 3.1 model through Azure AI Studio and wanted to use it with the AI SDK. I assumed its API was OpenAI-compatible, but that didn't work when I tried it. From the documentation I found out it uses the Azure AI Model Inference API, so does that mean we need a separate provider?
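For reference, a minimal sketch of the raw call shape (the `api-version` value and model name here are placeholders I haven't verified): the request body is close to the OpenAI format, but the host, the `api-key` header, and the `api-version` query parameter are Azure-specific, which seems to be why the OpenAI-compatible setup fails.

```ts
// Sketch only; the resource name, api key, api-version value, and
// model name are placeholders for whatever your deployment uses.
const res = await fetch(
  `https://${process.env.AZURE_RESOURCE_NAME}.services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview`,
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'api-key': process.env.AZURE_API_KEY!,
    },
    body: JSON.stringify({
      model: 'Meta-Llama-3.1-8B-Instruct',
      messages: [{ role: 'user', content: 'Hello' }],
    }),
  },
);
console.log(await res.json());
```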
@wong2 Yes, this might require a new provider, e.g. `azure-ai-studio`.
This need has come up again because we want to use DeepSeek R1 on Azure.
Agreed, we would really benefit from a non-OpenAI Azure provider so we can use DeepSeek on Azure!
+1 for an inference api provider!
yes please!
I put together the following code and it works for me:
```ts
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import { wrapLanguageModel, extractReasoningMiddleware } from 'ai';

const API_VERSION = process.env.AZURE_API_VERSION!;
const AZURE_API_KEY = process.env.AZURE_API_KEY!;
const AZURE_RESOURCE_NAME = process.env.AZURE_RESOURCE_NAME!;

const azureInf = createOpenAICompatible({
  name: 'azure-inf',
  // Azure AI Model Inference endpoint for the resource
  baseURL: `https://${AZURE_RESOURCE_NAME}.services.ai.azure.com/models`,
  // Azure expects an api-version query parameter on every request
  queryParams: {
    'api-version': API_VERSION,
  },
  // Azure authenticates with an `api-key` header rather than the
  // default `Authorization: Bearer` header, so inject it here
  fetch: async (url, request) => {
    return await fetch(url, {
      ...request,
      headers: { ...request?.headers, 'api-key': AZURE_API_KEY },
    });
  },
});
```
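Side note, and this is an assumption about the `openai-compatible` options rather than something I've tested: the provider also accepts a static `headers` map, which should let you drop the custom `fetch` wrapper.

```ts
// Untested alternative (assumes the `headers` option is merged into
// every request): pass the api-key header directly.
const azureInf = createOpenAICompatible({
  name: 'azure-inf',
  baseURL: `https://${AZURE_RESOURCE_NAME}.services.ai.azure.com/models`,
  queryParams: { 'api-version': API_VERSION },
  headers: { 'api-key': AZURE_API_KEY },
});
```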
```ts
// ...

// Then you can use it, e.g. in a customProvider languageModels map:
'chat-model-reasoning': wrapLanguageModel({
  model: azureInf('DeepSeek-R1'),
  // DeepSeek R1 emits its chain of thought inside <think> tags;
  // the middleware extracts it into a separate reasoning field
  middleware: extractReasoningMiddleware({ tagName: 'think' }),
}),
```
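For completeness, a quick end-to-end sketch of calling the wrapped model directly (assumes your Azure deployment is named `DeepSeek-R1`; the `reasoning` field on the result is what the middleware extracts):

```ts
import { generateText } from 'ai';

const { text, reasoning } = await generateText({
  model: wrapLanguageModel({
    model: azureInf('DeepSeek-R1'),
    middleware: extractReasoningMiddleware({ tagName: 'think' }),
  }),
  prompt: 'How many prime numbers are there below 20?',
});

console.log(reasoning); // the extracted <think>...</think> content
console.log(text);      // the final answer with the tags stripped
```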