Azure community team updates
Hi there!
This is a follow-up to https://github.com/vercel/ai/pull/8238#issuecomment-3339970839.
We tried to set up @vercel/ai-sdk-azure with outside collaborators, but unfortunately that did not work. Instead, we can use this issue to collaborate on all things @ai-sdk/azure.
Our Azure Community Team is:
| user | timezone |
|---|---|
| @gr2m | UTC-8 |
| @jephal | UTC+1 |
| @rahulbhadja | UTC-5 |
| @tommasoghisini | UTC+1 |
| @tsuzaki430 | UTC+9 |
Relevant issues and pull requests are labeled with https://github.com/vercel/ai/labels/provider%2Fazure
Monthly meeting: every 2nd Thursday at 16:30 UTC.
Next call: December 11th at 16:30 UTC / 8:30 PST / 17:30 CET / 11:30 ET (Zoom link)
Question: should we make azure.responses the default instead of azure.chat? https://github.com/vercel/ai/issues/8596#issuecomment-3458458336
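For context on what such a default switch changes at the wire level, here is a minimal sketch under my assumptions about the current Azure endpoint shapes (the exact paths and api-version values vary by Azure API surface, so treat this as illustrative, not authoritative): chat completions are addressed per deployment in the URL path, while the Responses API is resource-scoped and takes the deployment/model in the request body.

```typescript
// Hypothetical sketch of the endpoint difference between the two Azure APIs.
// Not part of @ai-sdk/azure; names and paths are my assumptions.
type AzureApi = 'chat' | 'responses';

function buildAzureUrl(
  resource: string,    // e.g. "my-resource"
  api: AzureApi,
  deployment: string,  // e.g. "gpt-4o" (deployment name)
  apiVersion: string,
): string {
  const base = `https://${resource}.openai.azure.com/openai`;
  if (api === 'chat') {
    // Deployment-scoped route: the deployment name is part of the URL.
    return `${base}/deployments/${deployment}/chat/completions?api-version=${apiVersion}`;
  }
  // Resource-scoped route: the deployment/model goes into the request body instead.
  return `${base}/v1/responses?api-version=${apiVersion}`;
}
```

The practical consequence for a default switch: code that only configures a deployment name would keep working, but anything that relies on the deployment appearing in the request URL (proxies, gateways, logging rules) would need to inspect the body instead.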
Thanks @gr2m for creating a way to communicate! Yes, let me start working on this issue. I also created one PR for an issue related to telemetry, and another PR for the AssemblyAI provider. I'm waiting for your review!
I've been experimenting a bit with the model router from Azure and created #9966. Not something that needs to be urgently addressed, but more documenting a few things about it (in case you don't want to keep the issue open, I can convert it to a discussion).
https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/model-router
@jephal @rahulbhadja @tommasoghisini @tsuzaki430
- As of today, we have 73 open issues/PRs for Azure: https://github.com/vercel/ai/labels/provider%2Fazure
  I'd love to get to 0 by the end of this year (except the ones that are not actionable, if any).
  When you have a chance, could you skim through the list and share what you think we should prioritize?
- If you are comfortable sharing: what timezone are you in? I'd like to start an "AI SDK + Azure" office hour, where I'd work on Azure issues and invite everyone to join to help out and socialize. I'd love to find a time that is convenient for everyone.
Hi @gr2m, sure, I'm in the EST time zone. I'm pretty flexible with office hours and usually work most of the time, haha. Regarding issues related to Azure, most of them are old and already fixed, but I will start going through them again.
Heyo @gr2m, sounds good! I'm in UTC+1.
Anything after 4pm my time, I should be able to join.
I'll also take a look at the list of issues/PRs and try to see if I find something to prioritize or that we might be able to close.
Hello! Thank you for the messages.
I'm in UTC+9, in Japan. However, I apologize: I am not very good at English, and I am afraid that attending the office hours would be a burden on everyone, so I would like to ask not to participate. I am truly sorry. I will check the provider/azure issue/PR list. Thank you very much for your understanding.
Of course, no problem at all @tsuzaki430. You can still share your priorities in writing ahead of office hours, and we will look into them.
@jephal @rahulbhadja @tommasoghisini
Time-wise: would 17:00-18:00 UTC work? (12:00-13:00 for @rahulbhadja, 18:00-19:00 for @jephal/@tommasoghisini). I could also start 30-60 minutes earlier; it would just make it a bit harder with my family logistics, but I'm happy to do it if one of you couldn't attend otherwise.
Would you prefer Wednesday or Thursday?
I'd suggest a monthly check-in starting next week.
12:00-1:00 PM works for me. I'm available on both Wednesday and Thursday, thanks @gr2m
Would prefer Thursday, but can also do Wednesday.
If we could do it 1 hour earlier that would be optimal, but I can make do with 18:00-19:00 :)
Thank you @gr2m !
same here :)
Okay great, let's meet every 2nd Thursday at 16:30-17:30 UTC. I'll update the issue description with a link.
I went through all open Azure issues and pull requests. They are mostly older and probably no longer relevant, but it would be good to go through them and verify whether the problems are still reproducible.
In summary, I feel like people are asking for better observability and a way to pass a model ID when a deployment ID is passed to the ai methods, e.g. see https://github.com/vercel/ai/issues/7383
Yes, I saw many issues regarding observability and telemetry not passing the correct modelId. I also tried to solve the modelName and deploymentName issue in #9497, but yes, we can definitely discuss this more as well.
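Until the provider reports the underlying model ID itself, one possible application-level stopgap is to keep a small deployment-to-model map and resolve it before emitting telemetry. A minimal sketch (all names here are hypothetical and illustrative, not part of @ai-sdk/azure):

```typescript
// Hypothetical workaround: map Azure deployment names to the underlying
// model IDs so observability logs can report the actual model.
const deploymentToModel: Record<string, string> = {
  'my-gpt4o-deployment': 'gpt-4o',
  'my-o3-mini-deployment': 'o3-mini',
};

function resolveModelId(deploymentId: string): string {
  // Fall back to the deployment ID so logs stay useful for unknown deployments.
  return deploymentToModel[deploymentId] ?? deploymentId;
}
```

This obviously duplicates configuration that already lives in Azure, which is why a provider-level fix (passing the model ID alongside the deployment ID) would be the better long-term answer.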
Community Meeting from November 13, 2025
Notes from yesterday's call with @jephal, @tommasoghisini, and @rahulbhadja
Intros
- Jeppe and Tommaso are coworkers at falck.com. They use the AI SDK and the Azure Provider for an internal chat app.
- Rahul is the co-founder of a startup using the AI SDK with Azure-hosted models.
Learnings
- Azure is significantly different from other providers, making it complex to set up and debug
- Azure frequently changes naming conventions and repackages their AI services
- Content filtering and API versioning present additional challenges
Priorities
- Code Interpreter File Downloads: Users can't easily download files generated by the Code Interpreter tool. We have an open pull request by @tsuzaki430 that needs to be updated: https://github.com/vercel/ai/pull/9135
- Telemetry Issues:
  - short term: fix the discrepancy between deployment ID and model ID in observability logs
  - longer term: observability will be a priority for the AI SDK soon; we will incorporate Azure's requirements
- Documentation is out of date since `azure.responses` was made the default: https://github.com/vercel/ai/pull/10209
Next
- Azure Deep Research Integration: @tommasoghisini is exploring Azure's deep research setup with GPT-5 and grounding with Bing and will document findings to gauge community interest
- Interest in real-time models for live interactions, but these are not yet prioritized by AI SDK
Hello! Thank you for sharing the meeting notes.
Intros
Thank you for telling me about everyone's activities; learning about them made me feel energized. I am working on a small project developing web applications that use Azure both internally and externally.
Learnings
Regarding the difficulty of Azure, I feel the same way. Error handling is something I personally find challenging. I once built an error UI for content filters, but it no longer works. I'm not sure of the cause, but it might be related to the API version. Regardless, I think Azure is a difficult provider.
Priorities
- Understood regarding the Code Interpreter; thank you for waiting. I have implemented the basic functionality of Code Interpreter in #9135, but I had many concerns about the impact on the openai package and whether changes made solely for my convenience would negatively affect others. However, since most of those major concerns have been resolved in main by other PRs, I think I can move forward. I might break it down into smaller PRs and recreate it.
- I understand that the telemetry issue takes priority. I am very grateful that everyone is tackling the problem where the relationship between deployment ID and model ID is causing various negative impacts.
Next
- Since the Bing Search v7 API ended in August of this year, I had to discontinue the web search tool I developed myself. Azure Deep Research Integration would be very useful for me, and I believe others would be very interested as well.
- The real-time API is a very futuristic feature, and I'm very excited about its future. Currently, text-based communication remains important, so I understand the prioritization.
Hello, I created issue #10255 for Code Interpreter output files. We can check the latest status there.
Hey guys, Azure added Anthropic models to Foundry.
Do we have any plans to support this new addition? I checked the SDK structure, and it follows the same pattern as the Anthropic native SDK. @gr2m
I assume it could work similarly to how we currently implement Anthropic in the amazon-bedrock and google-vertex providers.
Hello,
There have been several updates to the built-in tools in the Responses API. I have created new pull requests for the following work:
- web_search_preview is now available. #10370 enables `webSearchPreview` in `@ai-sdk/azure`. https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/web-search?view=foundry-classic
- image_generation now supports streaming. #10391 enables `imageGeneration` on `streamText` in `@ai-sdk/azure`, and includes updates to documentation and tests. No changes to the package itself were required. https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/dall-e?view=foundry-classic&tabs=gpt-image-1#streaming
An example for Anthropic Claude APIs on Microsoft Foundry using `@ai-sdk/anthropic`: #10400
You're killing it @tsuzaki430, thank you!!
I've created two PRs related to Code Interpreter:
- #10252
- #10266
If you have any feedback regarding the broader impact of these two PRs, please leave your comments in issue #10255. I'd appreciate it if you could take a look when you have time.
Could someone else also review @tsuzaki430's latest pull request?
Sorry, I've been busy at work; will review it now @gr2m @tsuzaki430 #10266