Nikolay Rovinskiy
This error may be caused by a missing capability host in the Azure AI Foundry project. It can be added using the Azure CLI as follows:

```shell
pip install -U azure-cli
# Create...
```
> Does the NuGet now declare a new dependency on one or more tracing packages, even if I don't want to turn on tracing? If so, what is the total...
Currently the `GetChatCompletionsClient` call supports only the Serverless connection type; we will fix this in the next release. In the meantime, it is possible to use [GetAzureOpenAIChatClient](https://github.com/Azure/azure-sdk-for-net/blob/0773b6e78b4ceef978844d72a18a5303a3ae935d/sdk/openai/Azure.AI.OpenAI/src/Custom/AzureOpenAIExtensions.cs#L23C30-L23C54) as a mitigation. Please see the...
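For reference, a minimal sketch of that mitigation. The endpoint and deployment name are placeholders, and the parameter names are assumptions based on the beta samples rather than a verified signature:

```C#
// Sketch of the GetAzureOpenAIChatClient mitigation. Requires Azure.AI.Projects,
// Azure.AI.OpenAI (which hosts the extension linked above), and Azure.Identity.
// Endpoint, deployment name, and parameter names are placeholders/assumptions.
using Azure.AI.Projects;
using Azure.Identity;
using OpenAI.Chat;

var projectClient = new AIProjectClient(
    new Uri("https://<your-project-endpoint>"),
    new DefaultAzureCredential());

// With no connection name, the extension should target the project's default
// Azure OpenAI connection (assumption).
ChatClient chatClient = projectClient.GetAzureOpenAIChatClient(
    deploymentName: "gpt-4o");

ChatCompletion completion = chatClient.CompleteChat("List the rainbow colors.");
Console.WriteLine(completion.Content[0].Text);
```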
Yes, that may be a problem: OpenAI clients will need the Azure OpenAI models. Here is a [PR](https://github.com/Azure/azure-sdk-for-net/pull/49118) to fix the issue; we expect it will be released in...
Hi @GraemeMalcolm, could you please try the latest versions of the projects and inference packages:

```bash
dotnet add package Azure.AI.Projects --version 1.0.0-beta.9
dotnet add package Azure.AI.Inference --version 1.0.0-beta.5
```
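If it helps, a small end-to-end check with that combination might look like the sketch below. The endpoint and model deployment name are placeholders, and `GetChatCompletionsClient` is the extension discussed earlier in this thread:

```C#
// Sketch for verifying Azure.AI.Projects 1.0.0-beta.9 with Azure.AI.Inference 1.0.0-beta.5.
// Endpoint and deployment name are placeholders, not values from this thread.
using Azure;
using Azure.AI.Inference;
using Azure.AI.Projects;
using Azure.Identity;

var projectClient = new AIProjectClient(
    new Uri("https://<your-project-endpoint>"),
    new DefaultAzureCredential());

ChatCompletionsClient chatClient = projectClient.GetChatCompletionsClient();

var options = new ChatCompletionsOptions
{
    Messages =
    {
        new ChatRequestSystemMessage("You are a helpful assistant."),
        new ChatRequestUserMessage("How many feet are in a mile?")
    },
    Model = "gpt-4o" // your model deployment name
};

Response<ChatCompletions> response = chatClient.Complete(options);
Console.WriteLine(response.Value.Content);
```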
Is it a hub-based project or the new Foundry-based (URI-based) project? I have deployed the gpt-4o model to Azure AI Foundry and was able to get responses both...
A hub-based project always stores its models in the Azure OpenAI resource. A Foundry-based project uses the default connection, i.e. models deployed to the Foundry....
I have created the fix for `GetAzureOpenAIChatClient`; Azure.AI.Projects 1.0.0-beta.10 and later should allow using connections to get models deployed on Azure OpenAI resources.

```C#
ChatClient chatClient...
```
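Assuming the fix keeps the same extension shape, usage with an explicit connection would presumably look like this; the `connectionName` parameter and its semantics are assumptions until beta.10 ships:

```C#
// Assumed beta.10 usage: pass the name of an Azure OpenAI connection so the chat
// client targets the model deployed on that connected resource. Names are placeholders.
using Azure.AI.Projects;
using Azure.Identity;
using OpenAI.Chat;

var projectClient = new AIProjectClient(
    new Uri("https://<your-project-endpoint>"),
    new DefaultAzureCredential());

ChatClient chatClient = projectClient.GetAzureOpenAIChatClient(
    connectionName: "<azure-openai-connection>", // assumed parameter
    deploymentName: "gpt-4o");

ChatCompletion completion = chatClient.CompleteChat("Hello!");
Console.WriteLine(completion.Content[0].Text);
```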
Yes, the changes will take effect in Azure.AI.Projects beta.10, which is planned for release this week.
`1.0.0-beta.10` has been released. @GraemeMalcolm, could you please upgrade and try?

```shell
dotnet add package Azure.AI.Projects --version 1.0.0-beta.10
```