.Net: IChatClient + Add FunctionChoiceBehavior to Google Connector
Describe the bug
Kernel function calling is not working properly with gemini-1.5-flash for the code below:
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.Google;
using SementicKernelDemo.Plugins.LightsPlugin;

string? modelID = Environment.GetEnvironmentVariable("MODEL_ID");
string? apiKey = Environment.GetEnvironmentVariable("API_KEY");
if (string.IsNullOrEmpty(modelID) || string.IsNullOrEmpty(apiKey))
    throw new Exception("Please set the MODEL_ID and API_KEY environment variables to your Google AI Gemini model ID and API key, respectively.");

// Create kernel
#pragma warning disable SKEXP0070
var kernel = Kernel.CreateBuilder()
    .AddGoogleAIGeminiChatCompletion(modelID, apiKey, GoogleAIVersion.V1)
    .Build();

// Add plugins
kernel.Plugins.AddFromType<LightsPlugin>("Lights");

// Enable prompt execution settings
GeminiPromptExecutionSettings geminiPromptExecutionSettings = new()
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(),
    ToolCallBehavior = GeminiToolCallBehavior.EnableKernelFunctions
};

var chatCompletionService = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory();
string? userInput;
do
{
    Console.WriteLine("User >");
    userInput = Console.ReadLine();
    try
    {
        history.AddUserMessage(userInput);
        var result = await chatCompletionService.GetChatMessageContentAsync(
            history,
            executionSettings: geminiPromptExecutionSettings,
            kernel: kernel);
        Console.WriteLine("Assistant > " + result);
        history.AddMessage(result.Role, result.Content ?? string.Empty);
    }
    catch (Exception ex)
    {
        Console.WriteLine("Error: " + ex.Message);
    }
} while (userInput is not null);
#pragma warning restore SKEXP0070
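For reference, the repro imports a LightsPlugin that isn't shown. Below is a minimal sketch of what such a plugin might look like; the class name comes from the code above, but the LightModel type, seed data, and method bodies are my assumptions.

using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using Microsoft.SemanticKernel;

// Hypothetical sketch of the "Lights" plugin registered above.
public class LightsPlugin
{
    private readonly List<LightModel> _lights = new()
    {
        new LightModel { Id = 1, Name = "Table Lamp", IsOn = false },
        new LightModel { Id = 2, Name = "Ceiling Light", IsOn = true }
    };

    [KernelFunction("get_lights")]
    [Description("Gets a list of lights and their current state")]
    public List<LightModel> GetLights() => _lights;

    [KernelFunction("change_state")]
    [Description("Changes the on/off state of a light by its ID")]
    public LightModel? ChangeState(int id, bool isOn)
    {
        var light = _lights.FirstOrDefault(l => l.Id == id);
        if (light is not null)
        {
            light.IsOn = isOn;
        }
        return light;
    }
}

public class LightModel
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
    public bool IsOn { get; set; }
}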
To Reproduce
Steps to reproduce the behavior: execute the above code. It fails when ToolCallBehavior = GeminiToolCallBehavior.EnableKernelFunctions or ToolCallBehavior = GeminiToolCallBehavior.AutoInvokeKernelFunctions is set; if I don't set it, the kernel functions are not recognized.
Expected behavior
It should recognize kernel functions and return the result from my plugin.
Platform
- OS: Windows
- IDE: Visual Studio
- Language: C#
- Source: Microsoft.SemanticKernel and Connectors.Google
Additional context
I am also facing the same issue with gemini-1.5-flash; it doesn't interact with the kernel functions.
Change GeminiPromptExecutionSettings to this:
GeminiPromptExecutionSettings geminiAIPromptExecutionSettings = new()
{
    ToolCallBehavior = GeminiToolCallBehavior.AutoInvokeKernelFunctions
};
@thevivekm I changed to the gemini-2.0-flash-exp model and now only add the message to history if content is available; otherwise the LLM returns a 400 Bad Request from the next call onward.
Console.WriteLine("Assistant > " + result);
if (!string.IsNullOrEmpty(result?.Content))
    history.AddMessage(result.Role, result.Content ?? string.Empty);
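One thing worth noting (an assumption on my part, not verified against the connector): adding only result.Content to the history drops any function-call items carried on the message, so a variant that keeps the whole ChatMessageContent may behave better:

Console.WriteLine("Assistant > " + result);

// Keep the full message object (including any function-call content items)
// instead of just the text, so tool calls stay paired with their results.
if (result is not null && (!string.IsNullOrEmpty(result.Content) || result.Items.Count > 0))
{
    history.Add(result);
}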
Now I am facing a different problem: prompts are calling the native functions, but somehow the LLM is not recognizing them. I don't want to create a separate native function; I was hoping that when the LLM executed a native function it would understand the output.
Here are my skprompt.txt and config.json:
This is a list of songs {{Songs.get_list_of_songs}}; add them into chat history if not already available. These are recently played songs {{User.user_recently_played_songs}}; add them into chat history if not already available.
You are an AI assistant. The assistant is helpful, creative, clever, and very friendly. Suggest a new song to the user based on the list of songs, but not one recently played. While responding to the user, you can use the following format: "Hey, I have a song suggestion for you. How about listening to {title} by {artist}?" The suggested song must be from the available songs list. If no available song list is present, please inform the client. If all songs are in the recently played list, then select a random song from the available song list.
{ "schema": 1, "type": "completion", "description": "Suggest a new song", "execution_settings": { "default": { "max_tokens": 200, "temperature": 0.7 } } }
I am new to Microsoft Semantic Kernel and learning the concepts.
I am facing this same issue. It works for me with OpenAI, but not with Gemini.
#pragma warning disable SKEXP0070
semanticKernelExecutionSettings = new Microsoft.SemanticKernel.Connectors.Google.GeminiPromptExecutionSettings()
{
    ModelId = model,
#pragma warning disable SKEXP0001
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(),
    ToolCallBehavior = GeminiToolCallBehavior.AutoInvokeKernelFunctions,
};
These are my execution settings. It seems like the LLM knows the names of the tools and tells me it is going to execute them, but it completes before actually using a tool.
I have tried with Flash 2.0 and 1.5 Pro. Semantic Kernel Google Connector version: 1.36.1-alpha.
services.AddGoogleAIGeminiChatCompletion(
    modelId: modelString,
    apiKey: Globals.Secrets?.GetValueOrDefault("gemini_key"),
    GoogleAIVersion.V1_Beta,
    serviceId: modelType.ToString()
);
I've also had to enable V1_Beta, because I would get an HTTP error and nothing would work if it wasn't on the beta API version.
This is the error you get for Python, but it's the same issue (I've removed some lines to increase readability):
in get_chat_message_contents
    completions = await self._inner_get_chat_message_contents(chat_history, settings)
line 112, in wrapper_decorator
    return await completion_func(*args, **kwargs)
line 137, in _inner_get_chat_message_contents
    response: AsyncGenerateContentResponse = await model.generate_content_async(
line 385, in generate_content_async
    response = await self._async_client.generate_content(
line 440, in generate_content
    response = await rpc(
line 230, in retry_wrapped_func
    return await retry_target(
line 160, in retry_target
    _retry_error_helper(
line 212, in _retry_error_helper
    raise final_exc from source_exc
line 155, in retry_target
    return await target()
File "/Users/alec/GitHub/SemanticKernelFunTimesPython/.venv/lib/python3.12/site-packages/google/api_core/grpc_helpers_async.py", line 88, in __await__
    raise exceptions.from_grpc_error(rpc_error) from rpc_error
google.api_core.exceptions.InvalidArgument: 400 * GenerateContentRequest.tools[0].function_declarations[1].parameters.properties: should be non-empty for OBJECT type
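The 400 means Gemini rejected a declared function whose parameter schema is an empty OBJECT, which is what a parameterless kernel function produces (this matches the get_state fix further down). A hedged C# illustration of the constraint; the plugin and names here are hypothetical:

using System.ComponentModel;
using Microsoft.SemanticKernel;

// Hypothetical plugin illustrating the constraint behind the 400 above.
public class StatusPlugin
{
    // A zero-parameter kernel function serializes to an OBJECT schema with no
    // properties, which GenerateContentRequest validation rejects. One described
    // parameter (even with a default) keeps the schema non-empty.
    [KernelFunction("get_status")]
    [Description("Gets the status of a named device")]
    public string GetStatus(
        [Description("The device to query, or 'all' for every device")] string device = "all")
        => device == "all" ? "All devices nominal" : $"{device}: nominal";
}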
I also encountered the same problem. I tried lowering the version to 1.33 and it works fine now.
Sometimes the model does not return any response; if the response is null or empty, don't add it to the history. This would solve the issue. Also try setting max_tokens.
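For example, a minimal sketch of both adjustments on the Gemini settings (the token cap is an illustrative value, not a recommendation):

#pragma warning disable SKEXP0070
var settings = new GeminiPromptExecutionSettings
{
    MaxTokens = 1024, // illustrative cap; pick what fits your use case
    ToolCallBehavior = GeminiToolCallBehavior.AutoInvokeKernelFunctions
};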
I tried with gemini-2.0-flash.
In my case the problem was the get_state function, probably because it was not correctly annotated or had too few arguments and descriptions. After applying the following change it worked:
diff --git a/promptflow_tool_semantic_kernel/tools/lights_plugin.py b/promptflow_tool_semantic_kernel/tools/lights_plugin.py
index 4c81834..dcbf9c4 100644
--- a/promptflow_tool_semantic_kernel/tools/lights_plugin.py
+++ b/promptflow_tool_semantic_kernel/tools/lights_plugin.py
@@ -25,9 +25,25 @@ class LightsPlugin:
         name="get_lights",
         description="Gets a list of lights and their current state",
     )
-    def get_state(self, ) -> str:
-        """Gets a list of lights and their current state."""
-        return self.lights
+    def get_state(
+        self, id: Annotated[int, "The ID of the light to get its state"],
+        all: Annotated[bool, "Flag to get all lights"]
+    ) -> Annotated[list[dict], "A list of lights with their properties"]:
+        """
+        Gets a list of lights and their current state.
+
+        Returns:
+            list[dict]: A list of dictionaries where each dictionary represents a light
+            with properties such as id, name, and is_on state.
+        """
+        if id:
+            for light in self.lights:
+                if light["id"] == id:
+                    return [light]
+            return []
+        if all:
+            return self.lights
+        return []
 
     @kernel_function(
         name="change_state",
@@ -35,8 +51,8 @@
     )
     def change_state(
         self,
-        id: int,
-        is_on: bool,
+        id: Annotated[int, "The ID of the light to change"],
+        is_on: Annotated[bool, "Whether to turn the light on or off"],
     ) -> str:
         """Changes the state of the light."""
         for light in self.lights:
For the C# version of SK: the "generic"

var promptExecutionSettings = new PromptExecutionSettings();
promptExecutionSettings.FunctionChoiceBehavior = FunctionChoiceBehavior.Auto();

is not mapped correctly to GeminiPromptExecutionSettings in the Gemini connector. If you are using Gemini, you must create the specific GeminiPromptExecutionSettings:

var geminiPromptExecutionSettings = new GeminiPromptExecutionSettings();
geminiPromptExecutionSettings.ToolCallBehavior = GeminiToolCallBehavior.AutoInvokeKernelFunctions;
public static GeminiPromptExecutionSettings FromExecutionSettings(PromptExecutionSettings? executionSettings)
{
    switch (executionSettings)
    {
        case null:
            return new GeminiPromptExecutionSettings() { MaxTokens = DefaultTextMaxTokens };
        case GeminiPromptExecutionSettings settings:
            return settings;
    }

    // auto invoke functions is not mapped in this "flaky" implementation since FunctionChoiceBehavior propertyname != ToolCallBehavior propertyname
    var json = JsonSerializer.Serialize(executionSettings);
    return JsonSerializer.Deserialize<GeminiPromptExecutionSettings>(json, JsonOptionsCache.ReadPermissive)!;
}
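Until that mapping is fixed, one hedged workaround is to translate the generic settings into Gemini-specific ones yourself before calling the service. GeminiSettingsMapper and its mapping logic below are mine, not part of SK; it's a sketch under the assumption that any FunctionChoiceBehavior should mean auto-invocation.

#pragma warning disable SKEXP0070, SKEXP0001
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.Google;

public static class GeminiSettingsMapper
{
    // Hypothetical helper: carries the intent of FunctionChoiceBehavior.Auto()
    // over to the connector's own ToolCallBehavior property.
    public static GeminiPromptExecutionSettings ToGemini(PromptExecutionSettings? settings)
    {
        if (settings is GeminiPromptExecutionSettings gemini)
        {
            return gemini;
        }

        return new GeminiPromptExecutionSettings
        {
            ModelId = settings?.ModelId,
            // Any FunctionChoiceBehavior on the generic settings is mapped,
            // by intent, to auto-invoking kernel functions.
            ToolCallBehavior = settings?.FunctionChoiceBehavior is not null
                ? GeminiToolCallBehavior.AutoInvokeKernelFunctions
                : null
        };
    }
}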
Function Choice Behavior Support
We will be adding support for IChatClient to our Google Connector, which will allow the Function Choice Behavior feature to work correctly.
Given the delay on the feature below:
- https://github.com/googleapis/google-cloud-dotnet/issues/13815
I just tried to use the Google connector for tool calling.
I create a GeminiPromptExecutionSettings object and also set FunctionChoiceBehavior.Auto(), but even with both my Google agent doesn't call its specified tools.

var chat = _kernel.GetRequiredService<IChatCompletionService>(skSettings.ServiceId);
var stream = chat.GetStreamingChatMessageContentsAsync(history, skSettings, _kernel, ct);
There is no FunctionChoiceBehavior.Auto in GeminiPromptExecutionSettings. It should be:

ToolCallBehavior = GeminiToolCallBehavior.AutoInvokeKernelFunctions
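Put together, a hedged sketch of that advice applied to the streaming call above (reusing _kernel, history, skSettings.ServiceId, and ct from the earlier snippet):

#pragma warning disable SKEXP0070
var geminiSettings = new GeminiPromptExecutionSettings
{
    // The Gemini connector keys auto-invocation off its own ToolCallBehavior.
    ToolCallBehavior = GeminiToolCallBehavior.AutoInvokeKernelFunctions
};

var chat = _kernel.GetRequiredService<IChatCompletionService>(skSettings.ServiceId);
await foreach (var chunk in chat.GetStreamingChatMessageContentsAsync(history, geminiSettings, _kernel, ct))
{
    Console.Write(chunk.Content);
}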
@sabbadino I tried that (only using GeminiToolCallBehavior.AutoInvokeKernelFunctions), but it seems like with IChatCompletionService it still doesn't want to do tool calling for some reason.
@roji looks like this is still an issue. Any reason the issue was closed?