mem0
Feat/llm monitoring callback
Description
Add Response Monitoring Callback to BaseLlmConfig and OpenAILLM
Summary
This PR introduces a new `response_callback` parameter to the `BaseLlmConfig` class and implements its invocation in `OpenAILLM`. This feature enables monitoring and logging of LLM responses for debugging, analytics, and observability purposes. Additional documentation and comprehensive tests have been added.
Changes
- Added `response_callback` parameter to `BaseLlmConfig`
  - New optional callback with signature: `(llm_instance: Any, raw_response: dict, params: dict) -> None`
  - Allows users to pass a monitoring function that receives:
    - The LLM instance
    - The raw response object
    - The parameters used in the request
- Implemented callback invocation in `OpenAILLM`
  - The callback is called after response parsing but before returning the result
  - Only invoked when the callback is provided in the configuration
- Added documentation for the new parameter
  - Updated the LLM configuration documentation to include `response_callback`
- Added comprehensive test coverage
  - Added 4 new tests covering:
    - Basic callback invocation with correct arguments
    - No-callback scenario
    - Exception handling in callbacks
    - Callback behavior with tool responses
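The invocation logic described above can be sketched roughly as follows. This is a minimal stand-in, not mem0's actual source: the class internals and the `_invoke_response_callback` method name are assumptions made for illustration.

```python
from typing import Any, Callable, Optional


class BaseLlmConfig:
    """Minimal stand-in for mem0's BaseLlmConfig (only the fields used here)."""

    def __init__(
        self,
        model: str = "gpt-4o-mini",
        response_callback: Optional[Callable[[Any, dict, dict], None]] = None,
    ):
        self.model = model
        self.response_callback = response_callback


class OpenAILLM:
    """Illustrative wrapper showing where the callback would fire."""

    def __init__(self, config: BaseLlmConfig):
        self.config = config

    def _invoke_response_callback(self, raw_response: dict, params: dict) -> None:
        # Only runs when a callback was configured; exceptions are caught
        # so a buggy monitoring hook can never break the main LLM flow.
        if self.config.response_callback is None:
            return
        try:
            self.config.response_callback(self, raw_response, params)
        except Exception:
            pass  # production code would log this instead of silently passing
```

Catching and suppressing callback exceptions matches the "robust exception handling" behavior claimed below: monitoring is best-effort and must never change the result the caller receives.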
Usage Example
```python
def monitoring_callback(llm_instance, raw_response, params):
    # Implement custom monitoring logic
    print(f"Received response from {llm_instance.config.model}:")
    print(f"Params: {params}")
    print(f"Response: {raw_response}")

config = {
    ...
    "llm": {
        "provider": "openai",
        "config": {
            "model": "qwen/qwen3-30b-a3b:free",
            "api_key": os.environ.get("OPENROUTER_API_KEY"),
            "openrouter_base_url": "https://openrouter.ai/api/v1",
            "response_callback": monitoring_callback,
        },
    },
    ...
}

memory = Memory.from_config(config)
memory.add(messages)
```
Benefits
- Enables real-time monitoring of LLM interactions
- Facilitates debugging by exposing raw responses
- Allows for custom analytics and logging implementations
- Provides insight into API parameters used for each request
- Backward compatible (optional parameter)
- Robust exception handling ensures callback errors don't break core functionality
- Comprehensive test coverage ensures reliability
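As one concrete instance of the analytics use case above, a callback could append one JSON record per LLM call to a local log file. This is a sketch only: the file path and the recorded fields are arbitrary illustrative choices, not part of this PR.

```python
import json
import time


def jsonl_logging_callback(llm_instance, raw_response, params):
    """Append one JSON line per LLM call for later analysis.

    Follows the (llm_instance, raw_response, params) signature introduced
    by this PR; everything else here is an illustrative choice.
    """
    record = {
        "ts": time.time(),
        "model": getattr(llm_instance.config, "model", None),
        # Drop the (potentially large) message payload from the log.
        "params": {k: v for k, v in params.items() if k != "messages"},
        "usage": raw_response.get("usage"),
    }
    with open("llm_calls.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
```

Such a callback would be passed via `"response_callback": jsonl_logging_callback` in the LLM config, exactly like `monitoring_callback` in the usage example above.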
Fixes # (issue)
Type of change
- [x] New feature (non-breaking change which adds functionality)
- [x] Documentation update
How Has This Been Tested?
Testing
All new tests pass successfully:
```shell
$ pytest tests/llms/test_openai.py
...
4 passed in 0.12s
```
- [x] Unit Test
Checklist:
- [ ] My code follows the style guidelines of this project
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes
- [ ] Any dependent changes have been merged and published in downstream modules
- [ ] I have checked my code and corrected any misspellings
Maintainer Checklist
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] Made sure Checks passed