semantic-kernel
Integrate cutting-edge LLM technology quickly and easily into your apps
### Feature Request

Include Gemini thought summaries as reasoning content

### Description

Currently, **Gemini 2.5 Pro** returns its internal thoughts when the `include_thoughts` parameter is set. It would be great...
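For context, the Gemini API marks thought-summary parts with a `thought` flag on each response part when `include_thoughts` is enabled. A minimal sketch of splitting those parts into reasoning content versus answer content — the `Part` class here is a local stand-in for the SDK's response part type, not Semantic Kernel's API:

```python
# Sketch: separating Gemini "thought" parts from answer parts.
# `Part` is a stand-in for the SDK's response-part type; the `thought`
# flag mirrors how Gemini marks thought summaries when include_thoughts
# is enabled.
from dataclasses import dataclass


@dataclass
class Part:
    text: str
    thought: bool = False  # True for thought-summary parts


def split_thoughts(parts: list[Part]) -> tuple[str, str]:
    """Return (reasoning_content, answer_content)."""
    reasoning = "".join(p.text for p in parts if p.thought)
    answer = "".join(p.text for p in parts if not p.thought)
    return reasoning, answer
```

A connector could then surface `reasoning` in a dedicated reasoning-content field rather than discarding it.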
---
name: Adding Introspection of Pandas DataFrame
about: Currently, when using the pandas DataFrame type in kernel functions, registering the kernel throws a `NameError: name 'weakref' is not defined`. Adding...
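This class of error can be reproduced without pandas or Semantic Kernel: `typing.get_type_hints` raises `NameError` when a string annotation references a module (here `weakref`, which pandas types pull in internally) that is not resolvable from the function's namespace. A minimal stand-in sketch:

```python
# Minimal stand-in for the reported failure: evaluating a string
# annotation that names an unimported module raises NameError during
# introspection, just as the kernel's parameter inspection does.
import typing


def takes_ref(x: "weakref.ref") -> None:  # 'weakref' is never imported here
    pass


try:
    typing.get_type_hints(takes_ref)
except NameError as exc:
    print(exc)  # name 'weakref' is not defined
```

Injecting the missing names into the namespace used for annotation evaluation is one plausible fix.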
#11813 dotnet needs it too

```
Unhandled exception. Microsoft.SemanticKernel.HttpOperationException: HTTP 400 (invalid_request_error: invalid_parameter_error) parameter.enable_thinking must be set to false for non-streaming calls
 ---> System.ClientModel.ClientResultException: HTTP 400 (invalid_request_error: invalid_parameter_error) parameter.enable_thinking must...
```
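The 400 above comes from Qwen-style endpoints that reject `enable_thinking: true` on non-streaming requests. A hedged sketch of forcing the parameter off when not streaming — the helper name and payload shape are illustrative, not the connector's actual code:

```python
# Illustrative sketch: Qwen/DashScope-compatible endpoints require
# enable_thinking to be false for non-streaming chat completions.
def build_chat_body(messages: list[dict], stream: bool) -> dict:
    body = {"messages": messages, "stream": stream}
    if not stream:
        # Assumption: the server returns HTTP 400 if enable_thinking stays
        # true on a non-streaming call, so disable it explicitly.
        body["enable_thinking"] = False
    return body
```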
- [x] Understand the current Google connector implementation and identify what needs to be changed
- [x] Add FunctionChoiceBehavior support to GeminiPromptExecutionSettings
- [x] Update GeminiChatCompletionClient to handle FunctionChoiceBehavior
- ...
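The checklist above amounts to translating FunctionChoiceBehavior onto Gemini's function-calling config, whose modes are AUTO, ANY, and NONE. A rough sketch of that translation — the function name and string keys are assumptions, not the connector's real code:

```python
# Rough sketch of translating FunctionChoiceBehavior-style settings to
# Gemini tool_config modes. Gemini's FunctionCallingConfig supports
# AUTO (model decides), ANY (a tool call is required), NONE (disabled).
def to_gemini_mode(behavior: str) -> str:
    mapping = {
        "auto": "AUTO",      # let the model decide whether to call a tool
        "required": "ANY",   # force the model to call a provided tool
        "none": "NONE",      # disable function calling entirely
    }
    try:
        return mapping[behavior.lower()]
    except KeyError:
        raise ValueError(f"unknown function choice behavior: {behavior}")
```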
---
name: Pass Agent Thread to orchestration
about: Agents orchestration
---

As a developer, to have more control and use more of the features, I wish to pass the whole AgentThread instead...
**Describe the bug**

Can't connect to my o4-mini model deployment. Tried with and without the apiVersion parameter. The code works properly when I change to my gpt-4o model deployment. Error: Model deployment...
In this cell

```python
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.hugging_face import HuggingFaceTextCompletion, HuggingFaceTextEmbedding
from semantic_kernel.core_plugins import TextMemoryPlugin
from semantic_kernel.memory import SemanticTextMemory, VolatileMemoryStore

kernel = Kernel()

# Configure LLM service
if...
```
**Describe the bug**

The ChatCompletionAgent is not emitting agent telemetry details; it is only emitting tracing for the chat_completion operation.

**To Reproduce**

Steps to reproduce the behavior:
1. Set up OpenTelemetry...
**Describe the bug**

I use a filter to check the request sequence index and limit the number of function calls to 3. When the request sequence index is higher than x, I set context.Terminate...
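A filter of the kind described can be sketched as follows. The context object here is a local stand-in for Semantic Kernel's auto-function-invocation context, which exposes a round counter (`request_sequence_index`) and a `terminate` flag; treat the exact field names as assumptions:

```python
# Sketch of a filter that stops automatic tool calling after 3 rounds.
# FakeContext stands in for the real auto-function-invocation context.
import asyncio
from dataclasses import dataclass

MAX_TOOL_ROUNDS = 3


@dataclass
class FakeContext:
    request_sequence_index: int  # which round of auto tool calling this is
    terminate: bool = False


async def limit_tool_calls(context: FakeContext, next):
    if context.request_sequence_index >= MAX_TOOL_ROUNDS:
        context.terminate = True   # stop; return whatever we have so far
        return
    await next(context)            # otherwise continue the filter pipeline
```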
#### Bug

**An LLM is calling multiple tools in an array, and semantic-kernel is printing the JSON as text instead of calling the function.** The LLM I am using is...
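One way to diagnose this symptom is to check whether the "text" the model returned is actually a JSON array of tool calls that was never dispatched. A hypothetical helper — the `name`/`arguments` field names are assumptions about the model's output shape, not part of semantic-kernel:

```python
# Hypothetical diagnostic: detect when a model's text reply is really a
# JSON array of tool-call objects rather than prose.
import json


def extract_tool_calls(text: str):
    """Return a list of tool-call dicts if the text is one, else None."""
    try:
        parsed = json.loads(text)
    except json.JSONDecodeError:
        return None
    if isinstance(parsed, list) and all(
        isinstance(call, dict) and "name" in call for call in parsed
    ):
        return parsed
    return None
```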