feat: Add n1n API as LLM provider
Description
This PR adds support for n1n API as a new LLM provider in mem0. n1n is a robust API aggregation platform that provides access to 400+ large language models through a single OpenAI-compatible API endpoint.
Key Features:
- Single API key access to 400+ models (OpenAI, Anthropic, Google, Meta, and more)
- OpenAI-compatible implementation for seamless integration
- Cost-effective pricing (up to 1/10 of official prices for some models)
- Global availability with no VPN required
- Support for multimodal capabilities (text, images, video, audio)
Implementation Details:
- Added `N1NConfig` configuration class extending `BaseLlmConfig`
- Implemented `N1NLLM` provider class extending `LLMBase`
- Uses the existing OpenAI Python SDK with a custom base URL (`https://n1n.ai/v1`); see the sketch after this list
- Full support for tool calling and structured outputs
- Proper error handling and API key validation
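For reviewers, here is a minimal sketch of the provider pattern described above: the class wraps the stock OpenAI SDK client pointed at n1n's base URL. The class name `N1NLLM`, the `generate_response` method, and the base URL come from this PR; the `N1N_API_KEY` environment variable name and the `n1n_base_url` config attribute are assumptions, and the actual `mem0/llms/n1n.py` (which extends `LLMBase`) may differ in detail.

```python
import os
from openai import OpenAI


class N1NLLM:
    """Illustrative sketch only; the real class in this PR extends LLMBase."""

    def __init__(self, config):
        # API key can come from the config or (assumed name) the N1N_API_KEY env var.
        api_key = getattr(config, "api_key", None) or os.getenv("N1N_API_KEY")
        if not api_key:
            raise ValueError("n1n API key is required (config or N1N_API_KEY env var)")
        self.config = config
        # n1n exposes an OpenAI-compatible endpoint, so the stock SDK works unchanged.
        base_url = getattr(config, "n1n_base_url", None) or "https://n1n.ai/v1"
        self.client = OpenAI(api_key=api_key, base_url=base_url)

    def generate_response(self, messages, tools=None, tool_choice="auto"):
        params = {
            "model": self.config.model,
            "messages": messages,
            "temperature": self.config.temperature,
            "max_tokens": self.config.max_tokens,
        }
        if tools:
            params["tools"] = tools
            params["tool_choice"] = tool_choice
        response = self.client.chat.completions.create(**params)
        return response.choices[0].message.content
```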
Motivation: This integration was requested by the n1n team to provide mem0 users with:
- Access to a wide variety of models without managing multiple API keys
- Competitive pricing with unified billing
- Easy model switching for different use cases
- High availability through global infrastructure
Fixes #3579
Type of change
- [x] New feature (non-breaking change which adds functionality)
- [x] Documentation update
How Has This Been Tested?
Unit Tests
- [x] Created comprehensive unit test suite (`tests/llms/test_n1n.py`)
- [x] 11 test cases covering all functionality:
  - Initialization with API key (config and environment variable)
  - Error handling for missing API key
  - Default model configuration
  - Custom base URL support
  - Response generation without tools
  - Response generation with tools
  - Custom response format handling
  - Config conversion from `BaseLlmConfig`
  - Dict-based configuration
  - Multiple model support
- [x] All tests use mocks to avoid actual API calls (a representative mocked test is sketched after this list)
- [x] All 11 tests passing ✅
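For context, the mocked tests look roughly like the sketch below: the OpenAI client is patched so no network call is made. The module path `mem0.llms.n1n` and class `N1NLLM` come from this PR; patching `mem0.llms.n1n.OpenAI` assumes the module imports `OpenAI` directly, and the `BaseLlmConfig` fields used here are assumptions rather than a copy of the actual test file.

```python
from unittest.mock import MagicMock, patch

from mem0.configs.llms.base import BaseLlmConfig
from mem0.llms.n1n import N1NLLM


@patch("mem0.llms.n1n.OpenAI")
def test_generate_response_without_tools(mock_openai):
    # Fake client whose chat.completions.create returns a canned reply,
    # so the test never touches the real n1n API.
    mock_client = MagicMock()
    mock_openai.return_value = mock_client
    mock_response = MagicMock()
    mock_response.choices[0].message.content = "Hello from n1n"
    mock_client.chat.completions.create.return_value = mock_response

    config = BaseLlmConfig(model="gpt-4o-mini", api_key="test-key")
    llm = N1NLLM(config)
    result = llm.generate_response(messages=[{"role": "user", "content": "Hi"}])

    assert result == "Hello from n1n"
    mock_client.chat.completions.create.assert_called_once()
```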
Test Results:
```
$ pytest tests/llms/test_n1n.py -v
========================= test session starts =========================
collected 11 items

tests/llms/test_n1n.py::test_n1n_initialization_with_api_key PASSED
tests/llms/test_n1n.py::test_n1n_initialization_with_env_var PASSED
tests/llms/test_n1n.py::test_n1n_initialization_without_api_key PASSED
tests/llms/test_n1n.py::test_n1n_default_model PASSED
tests/llms/test_n1n.py::test_n1n_custom_base_url PASSED
tests/llms/test_n1n.py::test_generate_response_without_tools PASSED
tests/llms/test_n1n.py::test_generate_response_with_tools PASSED
tests/llms/test_n1n.py::test_generate_response_with_response_format PASSED
tests/llms/test_n1n.py::test_n1n_config_conversion_from_base PASSED
tests/llms/test_n1n.py::test_n1n_config_from_dict PASSED
tests/llms/test_n1n.py::test_n1n_multiple_models PASSED
========================= 11 passed in 1.25s =========================
```
Integration Testing
- [x] Tested with real n1n API key
- [x] Verified successful response generation
- [x] Tested with multiple models (gpt-4o-mini, claude-3-5-sonnet)
- [x] Confirmed environment variable handling
- [x] Validated error messages for missing credentials
Test Configuration
```python
from mem0 import Memory

config = {
    "llm": {
        "provider": "n1n",
        "config": {
            "model": "gpt-4o-mini",
            "api_key": "your-n1n-api-key",
            "temperature": 0.1,
            "max_tokens": 2000,
        }
    }
}

m = Memory.from_config(config)
# Tested: add, search, get_all operations ✅
```
Checklist:
- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my feature works
- [x] New and existing unit tests pass locally with my changes
- [x] Any dependent changes have been merged and published in downstream modules
- [x] I have checked my code and corrected any misspellings
Additional Information
Files Changed
New Files (4):
- `mem0/configs/llms/n1n.py` - Configuration class for n1n
- `mem0/llms/n1n.py` - N1NLLM provider implementation
- `tests/llms/test_n1n.py` - Comprehensive unit tests
- `docs/components/llms/models/n1n.mdx` - User documentation
Modified Files (2):
- `mem0/utils/factory.py` - Registered n1n in LlmFactory (see the illustrative sketch below)
- `mem0/llms/configs.py` - Added n1n to supported providers
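For reviewers unfamiliar with the factory, the change follows the usual provider-registration pattern: map the provider name to the implementing class so that `Memory.from_config` can resolve `"provider": "n1n"`. The snippet below is an illustration of that pattern under the assumption that `LlmFactory` keeps a name-to-class-path mapping; it is not a verbatim diff of `mem0/utils/factory.py`.

```python
import importlib

# Illustrative shape of the registry (assumed, not the actual mem0 code):
# provider name -> dotted path of the implementing class.
provider_to_class = {
    "openai": "mem0.llms.openai.OpenAILLM",
    # ... existing providers ...
    "n1n": "mem0.llms.n1n.N1NLLM",  # new entry added by this PR
}


def create_llm(provider_name, config):
    """Look up and instantiate the provider class registered for `provider_name`."""
    class_path = provider_to_class.get(provider_name)
    if class_path is None:
        raise ValueError(f"Unsupported LLM provider: {provider_name}")
    module_path, class_name = class_path.rsplit(".", 1)
    llm_class = getattr(importlib.import_module(module_path), class_name)
    return llm_class(config)
```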
Dependencies
- No new dependencies required ✅
- Uses existing `openai` package (already in project dependencies)
- Fully compatible with current mem0 architecture
Available Models
Users can access 400+ models including:
- OpenAI: gpt-4o, gpt-4o-mini, gpt-4-turbo, o1, o3-mini
- Anthropic: claude-3-5-sonnet-20241022, claude-3-5-haiku-20241022
- Google: gemini-2.0-flash-exp, gemini-1.5-pro
- Meta: llama-3.3-70b-instruct, llama-3.1-405b-instruct
- And 390+ more models; switching between them is a one-line config change (see the example below)
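Because every model sits behind the same provider and API key, switching vendors only requires changing the model string in the config. A small hedged example (model IDs as advertised by n1n; availability is not verified here):

```python
from mem0 import Memory


def make_memory(model_name: str) -> Memory:
    """Build a Memory instance backed by n1n with the given model ID."""
    return Memory.from_config({
        "llm": {
            "provider": "n1n",
            "config": {
                "model": model_name,
                "api_key": "your-n1n-api-key",
            },
        }
    })


# Same provider and API key, different underlying vendors.
m_openai = make_memory("gpt-4o-mini")
m_claude = make_memory("claude-3-5-sonnet-20241022")
```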
Resources
- Website: https://n1n.ai/
- Documentation: https://docs.n1n.ai/
- Get API Key: https://n1n.ai/console
- Pricing: https://n1n.ai/pricing
Backward Compatibility
- [x] Fully backward compatible
- [x] No breaking changes to existing code
- [x] Follows established LLM provider patterns
Maintainer Checklist
- [ ] closes #3579
- [ ] Made sure Checks passed
Hey @ron-42, thanks for this PR, but n1n is not on the roadmap for now; it may be added in the future. Really appreciate your efforts.
So @parshvadaftari, do I have to close the PR?
No, you can keep it open. Once we have this on the roadmap, we'll take this PR into consideration.