langchaingo
draft: memory: enhanced token-based memory management
This draft PR proposes some ideas for token-based memory management, related to Discussion #124 and Discussion #317 about context-length handling.
Configuration Example

```go
memoryBuffer := memory.NewEnhancedTokenBuffer(
	memory.WithTokenLimit(2000),                // Maximum tokens to keep
	memory.WithEncodingModel("gpt-3.5-turbo"),  // Model for token counting
	memory.WithTrimStrategy(memory.TrimOldest), // How to trim messages
	memory.WithPreservePairs(true),             // Keep human-AI pairs together
	memory.WithMinMessages(2),                  // Minimum messages to preserve
)
```
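To make the intended semantics concrete, here is a minimal self-contained sketch of how `TrimOldest` combined with `WithPreservePairs(true)` and `WithMinMessages` could behave. The `message` type and `trimOldest` helper are hypothetical illustrations for this discussion, not code from the PR; the real implementation would operate on langchaingo's chat message types and a real tokenizer.

```go
package main

import "fmt"

// message is a simplified stand-in for a chat message; token counts
// are assumed to be pre-computed by the configured encoding model.
type message struct {
	role   string // "human" or "ai"
	text   string
	tokens int
}

// trimOldest drops the oldest human-AI pair from the front of the
// history until the total token count fits within limit, never
// reducing the history below minMessages messages. This sketches the
// TrimOldest strategy with pair preservation enabled.
func trimOldest(msgs []message, limit, minMessages int) []message {
	total := 0
	for _, m := range msgs {
		total += m.tokens
	}
	// Remove two messages at a time so human-AI pairs stay together.
	for total > limit && len(msgs)-2 >= minMessages {
		total -= msgs[0].tokens + msgs[1].tokens
		msgs = msgs[2:]
	}
	return msgs
}

func main() {
	history := []message{
		{"human", "Hi", 120},
		{"ai", "Hello!", 180},
		{"human", "Summarize X", 900},
		{"ai", "X is ...", 1100},
	}
	// 2300 tokens total exceeds the 2000-token limit, so the oldest
	// pair is dropped, leaving the most recent exchange.
	trimmed := trimOldest(history, 2000, 2)
	fmt.Println(len(trimmed)) // → 2
}
```

With a summarization-based strategy instead of `TrimOldest`, the dropped pair would be condensed into a summary message rather than discarded, which is one of the trade-offs worth discussing here.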
Comments welcome.