[BUG]: No serverless connection error on AIProjectClient.GetChatCompletionsClient
Library name and version
Azure.AI.Projects v1.0.0-beta.5
Describe the bug
projectClient.GetChatCompletionsClient() results in the following error:
No connections found for 'Serverless'. At least one connection is required. Please add a new connection in the Azure AI Foundry portal by following the instructions here: https://aka.ms/azsdk/azure-ai-projects/how-to/connections-add
The error does not occur when using 1.0.0-beta.3.
Expected behavior
Connection is found without error
Actual behavior
Error:
No connections found for 'Serverless'. At least one connection is required. Please add a new connection in the Azure AI Foundry portal by following the instructions here: https://aka.ms/azsdk/azure-ai-projects/how-to/connections-add
Reproduction Steps
Follow instructions at https://microsoftlearning.github.io/mslearn-ai-studio/Instructions/02a-AI-foundry-sdk.html but use
dotnet add package Azure.Identity
dotnet add package Azure.AI.Projects --prerelease
dotnet add package Azure.AI.Inference --prerelease
instead of
dotnet add package Azure.Identity
dotnet add package Azure.AI.Projects --version 1.0.0-beta.3
dotnet add package Azure.AI.Inference --version 1.0.0-beta.3
Environment
No response
Thanks for the feedback! We are routing this to the appropriate team for follow-up. cc @jpalvarezl @ralph-msft @trrwilson.
Hello @GraemeMalcolm. I'm an AI assistant for the azure-sdk-for-net repository. I found a solution for your issue!
The error indicates that no Serverless connection is available when calling GetChatCompletionsClient(). In version 1.0.0-beta.5 the SDK now enforces that at least one connection exists. To resolve this issue:
- Ensure you have added a Serverless connection to your Azure AI Foundry project via the portal. Follow the instructions for adding connections.
- Alternatively, if the behavior from 1.0.0-beta.3 is required, consider downgrading the package until this regression is addressed.
This should allow the client to locate the connection and proceed without error.
This should solve your problem; if it does not, feel free to reopen the issue.
@nick863 do you mind taking a look?
@GraemeMalcolm can you please print out the results of listing all connections in the above AI Foundry project and share with us? (Run the ConnectionsClient.GetConnections method.)
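For reference, a rough sketch of what that might look like, assuming the beta-era API shape (the connection-string constructor and the return/member names here are assumptions and may differ between beta versions):

using System;
using Azure.Identity;
using Azure.AI.Projects;

// Rough sketch, assuming the beta-era API shape: list every connection in the
// project. The connection-string constructor and ConnectionResponse member
// names are assumptions; check the samples for your installed beta version.
var projectClient = new AIProjectClient("<project-connection-string>", new DefaultAzureCredential());
ConnectionsClient connectionsClient = projectClient.GetConnectionsClient();
foreach (ConnectionResponse connection in connectionsClient.GetConnections().Value)
{
    Console.WriteLine(connection.Name);
}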
Here are the connection details I get:
ai-aihub911642661207_aoai
  Target: https://ai-aihub911642661207.openai.azure.com/
  AuthType: ApiKey
ai-aihub911642661207
  Target: https://ai-aihub911642661207.cognitiveservices.azure.com/
  AuthType: ApiKey
This may be related to the following error in the Foundry portal, which started occurring a week or so ago whenever you deploy a model (confirmed by multiple colleagues). The model deploys OK and can be used, but this error message appears immediately after deployment.
Currently the GetChatCompletionsClient call uses only the Serverless connection type; we will fix it in the next release. Meanwhile, it is possible to use GetAzureOpenAIChatClient as a mitigation. Please see the example here.
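For illustration, a minimal sketch of that mitigation (not the linked sample itself; the project endpoint and deployment name are placeholders, and the exact constructor and parameter shapes vary across the beta versions — the signature below matches the one shown later in this thread):

using System;
using Azure.Identity;
using Azure.AI.Projects;
using OpenAI.Chat;

// Minimal sketch of the suggested mitigation: get an Azure OpenAI chat client
// from the project instead of the (currently Serverless-only) chat completions
// client. "<project-endpoint>" and "<model-deployment>" are placeholders.
var projectClient = new AIProjectClient(
    new Uri("<project-endpoint>"),
    new DefaultAzureCredential());
ChatClient chatClient = projectClient.GetAzureOpenAIChatClient(
    deploymentName: "<model-deployment>", connectionName: null, apiVersion: null);
ChatCompletion result = chatClient.CompleteChat("How many feet are in a mile?");
Console.WriteLine(result.Content[0].Text);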
Does the OpenAI Chat Completions client work with non-OpenAI models? The lab uses a Phi-4 model.
Yes, that may be a problem; OpenAI clients will need Azure OpenAI models. Here is a PR to fix the issue. We expect it will be released in the next version of Azure.AI.Projects for Azure AI Services authenticated by API key, and after the new release of Azure.AI.Inference, Entra ID-authenticated resources will also be supported.
Hi @GraemeMalcolm, since you haven’t asked that we /unresolve the issue, we’ll close this out. If you believe further discussion is needed, please add a comment /unresolve to reopen the issue.
@nick863 This issue is still present when using Azure.AI.Projects 1.0.0-beta.6 and Azure.AI.Inference 1.0.0-beta.4.
The UI bug in the portal has been resolved, so I assume that's unrelated.
I can concur that beta.6 still has the "cannot find serverless connection" error...
@dargilco @nick863
The issue is still present in 1.0.0-beta.8.
Can you please /unresolve until a fix is in-place? Or confirm that this issue won't be fixed?
Hi @GraemeMalcolm, could you please try the latest versions of Projects and Inference:
dotnet add package Azure.AI.Projects --version 1.0.0-beta.9
dotnet add package Azure.AI.Inference --version 1.0.0-beta.5
Hi @GraemeMalcolm. Thank you for opening this issue and giving us the opportunity to assist. To help our team better understand your issue and the details of your scenario please provide a response to the question asked above or the information requested above. This will help us more accurately address your issue.
Sorry for the delayed response (I was on vacation). The issue still exists in Azure.AI.Projects 1.0.0-beta.9.
To be clear, what I'm seeing is inconsistent behavior between the Python and .NET SDKs (and I believe the Python SDK is behaving correctly based on my understanding of the intent).
When using the Python SDK, you can create an AI Foundry project and deploy a GPT model to the Azure AI Foundry Models (formerly Azure AI Model Inference) endpoint, which is now the default target for any "Flagship" model, including both OpenAI and serverless models. You can then use the AI Foundry project endpoint in your Python code to connect to the project, and use either project_client.inference.get_chat_completions_client() to get an Azure AI Model Inference client object or project_client.inference.get_azure_openai_client() to get an OpenAI chat client object. Either client object can then be used to chat with the model. Note that in the second case, you're connecting to an Azure AI Model Inference endpoint but using the OpenAI client.
When using the .NET SDK, the projectClient.GetAzureOpenAIChatClient method doesn't work: it can't find the model in the workspace (presumably because it assumes the GPT model is deployed to an OpenAI endpoint rather than an Azure AI Model Inference endpoint). It only works if you deploy the GPT model to a non-default, Azure OpenAI-specific endpoint (and specify that endpoint in the call). If you try to use it with a model on the default Azure AI Foundry project endpoint, you get this error:
Connection gpt-4o can't be found in this workspace
Status: 404 (Connection gpt-4o can't be found in this workspace)
ErrorCode: UserError
Content:
{
"error": {
"code": "UserError",
"severity": null,
"message": "Connection gpt-4o can't be found in this workspace",
"messageFormat": null,
"messageParameters": null,
"referenceCode": null,
"detailsUri": null,
"target": null,
"details": [],
"innerError": {
"code": "NotFoundError",
"innerError": null
},
"debugInfo": null,
"additionalInfo": null
},
"correlation": {
"operation": "58091d8369cc56080365a71a96070797",
"request": "e610be106b768fec"
},
"environment": "eastus2",
"location": "eastus2",
"time": "2025-06-11T13:50:38.5972986+00:00",
"componentName": "account-rp",
"statusCode": 404
}
Headers:
Vary: REDACTED
Request-Context: REDACTED
x-ms-response-type: REDACTED
Strict-Transport-Security: REDACTED
X-Content-Type-Options: REDACTED
azureml-served-by-cluster: REDACTED
x-request-time: REDACTED
apim-request-id: REDACTED
x-ms-region: REDACTED
Date: Wed, 11 Jun 2025 13:50:37 GMT
Content-Length: 682
Content-Type: application/json; charset=utf-8
Take a look at https://learn.microsoft.com/en-us/training/modules/ai-foundry-sdk/04-chat-client?pivots=python to see what we assume is the intent here - i.e. you can deploy a GPT model to the default AI Foundry project endpoint (which is an Azure AI Model Inference endpoint) but use the AI Project library to get an OpenAI client object and then use existing OpenAI-based code to chat with it.
In previous versions of the .NET SDK (1.0.0-beta.3) the C# behavior was consistent with Python; but that no longer works, so we had to remove the OpenAI example from the C# pivot in the Learn module page at https://learn.microsoft.com/en-us/training/modules/ai-foundry-sdk/04-chat-client?pivots=csharp and remove the OpenAI option from the lab exercise.
Is it a hub-based project or the new Foundry-based (URI-based) project?
I have deployed the gpt-4o model to Azure AI Foundry and was able to get responses using both GetChatCompletionsClient
ChatCompletionsClient chatClient = client.GetChatCompletionsClient();

var requestOptions = new ChatCompletionsOptions()
{
    Messages =
    {
        new ChatRequestSystemMessage("You are a helpful assistant."),
        new ChatRequestUserMessage("How many feet are in a mile?"),
    },
    Model = modelDeploymentName
};

Response<ChatCompletions> response = chatClient.Complete(requestOptions);
and GetAzureOpenAIChatClient
ChatClient chatClient = projectClient.GetAzureOpenAIChatClient(deploymentName: modelDeploymentName, connectionName: null, apiVersion: null);
ChatCompletion result = chatClient.CompleteChat("Tell me a joke.");
These models were deployed to the Azure OpenAI resource.
Hi @GraemeMalcolm. Thank you for opening this issue and giving us the opportunity to assist. To help our team better understand your issue and the details of your scenario please provide a response to the question asked above or the information requested above. This will help us more accurately address your issue.
It works in a Foundry-based project for Python, but not .NET. I've not tried a hub-based project (will try tomorrow); but to be honest, that's not really a solution - it should work with GPT models in a Foundry Models endpoint regardless of project type, no?
A hub-based project always used to store its models in the Azure OpenAI resource. A Foundry-based project uses the default connection, i.e. models deployed to the Foundry. I will work with the team to clarify how we can use models deployed to an Azure OpenAI resource.
The problem is not with models deployed to an OpenAI endpoint. It's with using the OpenAI SDK to chat with OpenAI models deployed to the default Foundry Models endpoint.
To repro:
- Follow the instructions at https://microsoftlearning.github.io/mslearn-ai-studio/Instructions/02a-AI-foundry-sdk.html to create an AI Foundry project, deploy a gpt-4o model (which by default is deployed to an Azure AI Model Inference endpoint), and create a client app in Python or C# to chat with it (see code below)
- Modify the client app to use the OpenAI API (see code below). You need to use pip install or dotnet add to add the Azure OpenAI library package. Use the same endpoint and model name environment variables / project settings; just change the code and import the OpenAI library.
- The Python version works as expected. The C# version produces the following error:
Connection gpt-4o can't be found in this workspace
Status: 404 (Connection gpt-4o can't be found in this workspace)
ErrorCode: UserError
Content:
{
"error": {
"code": "UserError",
"severity": null,
"message": "Connection gpt-4o can't be found in this workspace",
"messageFormat": null,
"messageParameters": null,
"referenceCode": null,
"detailsUri": null,
"target": null,
"details": [],
"innerError": {
"code": "NotFoundError",
"innerError": null
},
"debugInfo": null,
"additionalInfo": null
},
"correlation": {
"operation": "e60202e08782d4f74f59b9150c556a8d",
"request": "b153f54f86724721"
},
"environment": "eastus2",
"location": "eastus2",
"time": "2025-06-24T16:57:19.8790236+00:00",
"componentName": "account-rp",
"statusCode": 404
}
Headers:
Vary: REDACTED
Request-Context: REDACTED
x-ms-response-type: REDACTED
Strict-Transport-Security: REDACTED
X-Content-Type-Options: REDACTED
azureml-served-by-cluster: REDACTED
x-request-time: REDACTED
apim-request-id: REDACTED
x-ms-region: REDACTED
Date: Tue, 24 Jun 2025 16:57:19 GMT
Content-Length: 682
Content-Type: application/json; charset=utf-8
Code
Python
AI Inference API
## Azure AI Inference - Python
import os

# Add references
from dotenv import load_dotenv
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.inference.models import SystemMessage, UserMessage, AssistantMessage

def main():
    # Clear the console
    os.system('cls' if os.name=='nt' else 'clear')

    try:
        # Get configuration settings
        load_dotenv()
        project_connection = os.getenv("PROJECT_ENDPOINT")
        model_deployment = os.getenv("MODEL_DEPLOYMENT")

        # Initialize the project client
        projectClient = AIProjectClient(
            credential=DefaultAzureCredential(
                exclude_environment_credential=True,
                exclude_managed_identity_credential=True
            ),
            endpoint=project_connection,
        )

        # Get a chat client
        chat = projectClient.inference.get_chat_completions_client()

        # Initialize prompt with system message
        prompt = [
            SystemMessage("You are a helpful AI assistant that answers questions.")
        ]

        # Loop until the user types 'quit'
        while True:
            # Get input text
            input_text = input("Enter the prompt (or type 'quit' to exit): ")
            if input_text.lower() == "quit":
                break
            if len(input_text) == 0:
                print("Please enter a prompt.")
                continue

            # Get a chat completion
            prompt.append(UserMessage(input_text))
            response = chat.complete(
                model=model_deployment,
                messages=prompt)
            completion = response.choices[0].message.content
            print(completion)
            prompt.append(AssistantMessage(completion))

    except Exception as ex:
        print(ex)

if __name__ == '__main__':
    main()
OpenAI API
# OpenAI API with AI Foundry - Python
import os
from dotenv import load_dotenv
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from openai import AzureOpenAI

def main():
    # Clear the console
    os.system('cls' if os.name=='nt' else 'clear')

    try:
        # Get configuration settings
        load_dotenv()
        project_connection = os.getenv("PROJECT_ENDPOINT")
        model_deployment = os.getenv("MODEL_DEPLOYMENT")

        # Initialize the project client
        projectClient = AIProjectClient(
            credential=DefaultAzureCredential(
                exclude_environment_credential=True,
                exclude_managed_identity_credential=True
            ),
            endpoint=project_connection,
        )

        # Get a chat client
        openai_client = projectClient.inference.get_azure_openai_client(api_version="2024-10-21")

        # Initialize prompt with system message
        prompt = [
            {"role": "system", "content": "You are a helpful AI assistant that answers questions."}
        ]

        # Loop until the user types 'quit'
        while True:
            # Get input text
            input_text = input("Enter the prompt (or type 'quit' to exit): ")
            if input_text.lower() == "quit":
                break
            if len(input_text) == 0:
                print("Please enter a prompt.")
                continue

            # Get a chat completion
            prompt.append({"role": "user", "content": input_text})
            response = openai_client.chat.completions.create(
                model=model_deployment,
                messages=prompt)
            completion = response.choices[0].message.content
            print(completion)
            prompt.append({"role": "assistant", "content": completion})

    except Exception as ex:
        print(ex)

if __name__ == '__main__':
    main()
C#
AI Inference API
// Azure AI Inference - C#
using System;
using Azure;
using System.IO;
using System.Text;
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

// Add references
using Azure.Identity;
using Azure.AI.Projects;
using Azure.AI.Inference;

namespace chat_app
{
    class Program
    {
        static void Main(string[] args)
        {
            // Clear the console
            Console.Clear();

            try
            {
                // Get config settings
                IConfigurationBuilder builder = new ConfigurationBuilder().AddJsonFile("appsettings.json");
                IConfigurationRoot configuration = builder.Build();
                string project_connection = configuration["PROJECT_ENDPOINT"];
                string model_deployment = configuration["MODEL_DEPLOYMENT"];

                // Initialize the project client
                DefaultAzureCredentialOptions options = new()
                {
                    ExcludeEnvironmentCredential = true,
                    ExcludeManagedIdentityCredential = true
                };
                var projectClient = new AIProjectClient(
                    new Uri(project_connection),
                    new DefaultAzureCredential(options));

                // Get a chat client
                ChatCompletionsClient chat = projectClient.GetChatCompletionsClient();

                // Initialize prompt with system message
                var prompt = new List<ChatRequestMessage>(){
                    new ChatRequestSystemMessage("You are a helpful AI assistant that answers questions.")
                };

                // Loop until the user types 'quit'
                string input_text = "";
                while (input_text.ToLower() != "quit")
                {
                    // Get user input
                    Console.WriteLine("Enter the prompt (or type 'quit' to exit):");
                    input_text = Console.ReadLine();
                    if (input_text.ToLower() != "quit")
                    {
                        // Get a chat completion
                        prompt.Add(new ChatRequestUserMessage(input_text));
                        var requestOptions = new ChatCompletionsOptions()
                        {
                            Model = model_deployment,
                            Messages = prompt
                        };
                        Response<ChatCompletions> response = chat.Complete(requestOptions);
                        var completion = response.Value.Content;
                        Console.WriteLine(completion);
                        prompt.Add(new ChatRequestAssistantMessage(completion));
                    }
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
        }
    }
}
OpenAI API (fails with error)
// OpenAI API - C#
using System;
using Azure;
using System.IO;
using System.Text;
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;
using Azure.Identity;
using Azure.AI.Projects;
using OpenAI.Chat;
using Azure.AI.OpenAI;

namespace chat_app
{
    class Program
    {
        static void Main(string[] args)
        {
            // Clear the console
            Console.Clear();

            try
            {
                // Get config settings
                IConfigurationBuilder builder = new ConfigurationBuilder().AddJsonFile("appsettings.json");
                IConfigurationRoot configuration = builder.Build();
                string project_connection = configuration["PROJECT_ENDPOINT"];
                string model_deployment = configuration["MODEL_DEPLOYMENT"];

                // Initialize the project client
                DefaultAzureCredentialOptions options = new()
                {
                    ExcludeEnvironmentCredential = true,
                    ExcludeManagedIdentityCredential = true
                };
                var projectClient = new AIProjectClient(
                    new Uri(project_connection),
                    new DefaultAzureCredential(options));

                // Get a chat client
                ChatClient openaiClient = projectClient.GetAzureOpenAIChatClient(model_deployment);

                // Initialize prompt with system message
                var prompt = new List<ChatMessage>(){
                    new SystemChatMessage("You are a helpful AI assistant that answers questions.")
                };

                // Loop until the user types 'quit'
                string input_text = "";
                while (input_text.ToLower() != "quit")
                {
                    // Get user input
                    Console.WriteLine("Enter the prompt (or type 'quit' to exit):");
                    input_text = Console.ReadLine();
                    if (input_text.ToLower() != "quit")
                    {
                        // Get a chat completion
                        prompt.Add(new UserChatMessage(input_text));
                        ChatCompletion completion = openaiClient.CompleteChat(prompt);
                        var completionText = completion.Content[0].Text;
                        Console.WriteLine(completionText);
                        prompt.Add(new AssistantChatMessage(completionText));
                    }
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
        }
    }
}
I have created a fix for GetAzureOpenAIChatClient; Azure.AI.Projects 1.0.0-beta.10 and later should allow using connections to get models deployed on Azure OpenAI resources.
ChatClient chatClient = projectClient.GetAzureOpenAIChatClient(deploymentName: modelDeploymentName, connectionName: "AOAI_connection_name", apiVersion: null);
To use the chat completions client with a custom connection, it will need to be created through its constructor. This behavior changed in version 1.0.0-beta.9, and the same changes were reflected in the new Python packages.
In the new package, both projectClient.GetChatCompletionsClient() and projectClient.GetAzureOpenAIChatClient() look for the model deployment in Azure AI Foundry; however, GetAzureOpenAIChatClient has a connectionName parameter to override that behavior.
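For illustration, a rough sketch of creating the Azure.AI.Inference chat completions client directly from its constructor for a custom connection (the endpoint URL, API key, and deployment name are placeholders):

using System;
using Azure;
using Azure.AI.Inference;

// Rough sketch: construct the chat completions client directly against a custom
// connection's endpoint, rather than resolving it through AIProjectClient.
// The endpoint URL, API key, and deployment name below are placeholders.
var chatClient = new ChatCompletionsClient(
    new Uri("https://<your-resource>.services.ai.azure.com/models"),
    new AzureKeyCredential("<your-api-key>"));

var requestOptions = new ChatCompletionsOptions()
{
    Model = "<model-deployment>",
    Messages = { new ChatRequestUserMessage("How many feet are in a mile?") }
};
Response<ChatCompletions> response = chatClient.Complete(requestOptions);
Console.WriteLine(response.Value.Content);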
New samples in Python: GetChatCompletionsClient, GetAzureOpenAIChatClient; in C#: GetChatCompletionsClient, GetAzureOpenAIChatClient.
OK, I think I've figured this out. You can use either endpoint (Azure AI Foundry project or Azure OpenAI) to get a project client, but you must explicitly specify the connectionName parameter in GetAzureOpenAIChatClient as null for it to find the model in the default project endpoint. So:
ChatClient openaiClient = projectClient.GetAzureOpenAIChatClient(deploymentName: model_deployment, connectionName: null, apiVersion: "2024-10-21");
but not:
ChatClient openaiClient = projectClient.GetAzureOpenAIChatClient(model_deployment);
This then enables you to use the OpenAI SDK to chat with any model (GPT or otherwise, for example, Phi)
Does that seem right? If so, you can go ahead and close this bug.
Yes, the changes will take effect in Azure.AI.Projects beta.10, which is planned for release this week.
1.0.0-beta.10 has been released. @GraemeMalcolm, could you please upgrade and try?
dotnet add package Azure.AI.Projects --version 1.0.0-beta.10
Hi @GraemeMalcolm. Thank you for opening this issue and giving us the opportunity to assist. We believe that this has been addressed. If you feel that further discussion is needed, please add a comment with the text "/unresolve" to remove the "issue-addressed" label and continue the conversation.
Yup - resolved as long as you use explicit parameters - many thanks