
RAG architecture: index and query any data using LLM and natural language, track sources, show citations, asynchronous memory patterns.

Results: 122 kernel-memory issues

## Motivation and Context (Why the change? What's the scenario?)

See #408.

waiting for author
work in progress

## Motivation and Context (Why the change? What's the scenario?)

Things are moving fast in Vector Support for SQL Azure. Now that the official VECTOR type has been introduced...

waiting for author

### Context / Scenario

I am using the latest Ollama connector and a Qdrant DB with `llama3.2:latest`. Below is the code snippet:

```
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.KernelMemory;
using Microsoft.KernelMemory.AI.Ollama;
...
```

bug
triage

## Motivation and Context (Why the change? What's the scenario?)

Azure Redis Enterprise does not support FT.CONFIG commands, which causes the method to fail completely, and as this is used...
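The failure mode described in the excerpt above, where an unsupported FT.CONFIG command takes down the whole method, suggests a graceful-degradation pattern. The sketch below is an assumption about the fix, not Kernel Memory's actual code: the client class, exception type, and option names are all illustrative stand-ins.

```python
class UnsupportedCommandError(Exception):
    pass

class FakeRedisClient:
    """Illustrative stand-in for a Redis client on Azure Redis Enterprise,
    which rejects FT.CONFIG commands."""
    def execute_command(self, *args):
        if args[0] == "FT.CONFIG":
            raise UnsupportedCommandError("FT.CONFIG is not supported")
        return "OK"

def safe_ft_config(client, *args):
    """Run FT.CONFIG, but degrade gracefully where the command is blocked."""
    try:
        return client.execute_command("FT.CONFIG", *args)
    except UnsupportedCommandError:
        # Fall back to server defaults instead of failing the whole connector.
        return None

client = FakeRedisClient()
print(safe_ft_config(client, "GET", "MAXEXPANSIONS"))  # → None
```

The design choice is to treat the configuration step as optional tuning rather than a hard prerequisite, so one unsupported command does not break connectivity.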

### Context / Scenario

Whenever executing GetListAsync (for me this happened when I wanted to delete a specific memory, as it first needs to get a list of memories from...

bug
triage

# Microsoft Kernel Memory Python Libraries 🐍

This directory contains three Python packages that provide different ways to interact with Microsoft Kernel Memory:

## 📦 kernel-memory-client

An autogenerated Python client...

paused

## Motivation and Context (Why the change? What's the scenario?)

1. The ReadPipelineStatusAsync returns a DataPipeline instead of a DataPipelineStatus.
2. The ReadPipelineSummaryAsync returns a DataPipelineStatus instead of a Summary (I...

paused

## Motivation and Context (Why the change? What's the scenario?)

I want to be able to switch OpenAI text generation models at runtime.

## High level description (Approach, Design)...

waiting for author
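One common way to support runtime model switching, sketched below under the assumption of a wrapper-based design, is to put an indirection layer in front of the generator so callers never hold a direct reference to a fixed model. The class and method names here are hypothetical, not Kernel Memory's API.

```python
class TextGenerator:
    """Minimal stand-in for an OpenAI text-generation client bound to one model."""
    def __init__(self, model: str):
        self.model = model

    def generate(self, prompt: str) -> str:
        # A real implementation would call the OpenAI API here.
        return f"[{self.model}] {prompt}"

class SwitchableGenerator:
    """Wraps a generator so the model can be swapped without rebuilding the pipeline."""
    def __init__(self, model: str):
        self._gen = TextGenerator(model)

    def set_model(self, model: str) -> None:
        self._gen = TextGenerator(model)

    def generate(self, prompt: str) -> str:
        return self._gen.generate(prompt)

gen = SwitchableGenerator("gpt-4o-mini")
print(gen.generate("hello"))  # [gpt-4o-mini] hello
gen.set_model("gpt-4o")
print(gen.generate("hello"))  # [gpt-4o] hello
```

Consumers keep the same `SwitchableGenerator` reference throughout, so a model change takes effect on the next call without reconstructing dependent services.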

### Context / Scenario

If I try to use Kernel Memory with gpt-o1, the text generation throws an exception:

```
HTTP 400 (invalid_request_error: unsupported_parameter)
Parameter: max_tokens
Unsupported parameter: 'max_tokens' is...
```

bug
triage
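The error in the excerpt above arises because OpenAI's o-series reasoning models reject the `max_tokens` request parameter and expect `max_completion_tokens` instead. A minimal sketch of picking the parameter name by model, where the prefix check is my assumption rather than Kernel Memory's actual logic:

```python
def token_limit_params(model: str, limit: int) -> dict:
    # o-series reasoning models reject 'max_tokens' and expect
    # 'max_completion_tokens'; older chat models still use 'max_tokens'.
    if model.startswith(("o1", "o3")):
        return {"max_completion_tokens": limit}
    return {"max_tokens": limit}

print(token_limit_params("o1-preview", 1024))  # {'max_completion_tokens': 1024}
print(token_limit_params("gpt-4o", 1024))      # {'max_tokens': 1024}
```

In practice the resulting dict would be merged into the request body sent to the chat-completions endpoint, so the same calling code works across both model families.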