[enhancement] Add reasoning_effort parameter support for Azure/OpenAI configs
Description
Adds support for the reasoning_effort parameter in the AzureOpenAIConfig and OpenAIConfig classes, enabling users to test and compare the reasoning effort levels ("low", "medium", "high") supported by OpenAI's reasoning models (o1, o3, gpt-5).
The parameter was recently added to the OpenAI SDK but was not implemented in Mem0's configuration classes, so passing reasoning_effort in a config raised a TypeError.
This change lets users evaluate performance and latency trade-offs across reasoning models directly within Mem0.
Fixes #3651
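For reference, a minimal usage sketch (the llm/provider/config layout follows Mem0's usual config pattern; the model name and effort level here are placeholders, not part of this PR's code):

```python
# Illustrative sketch only: passes reasoning_effort through a standard Mem0 config dict.
# The model name and effort value are placeholders.
from mem0 import Memory

config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "o1",
            "reasoning_effort": "medium",  # "low" | "medium" | "high"
        },
    }
}

m = Memory.from_config(config)
```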
Type of change
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Refactor (does not change functionality, e.g. code style improvements, linting)
- [ ] Documentation update
How Has This Been Tested?
Tested by initializing both config classes with the reasoning_effort parameter and verifying:
- Parameter is accepted and stored correctly
- Parameter is included in API params for reasoning models (o1, o3, gpt-5)
- Parameter is excluded for non-reasoning models (gpt-4, etc.)
- All existing unit tests pass
- [x] Unit Test
- [x] Test Script (please provide)
```python
# Test script used
from mem0.configs.llms.azure import AzureOpenAIConfig
from mem0.configs.llms.openai import OpenAIConfig

# Test AzureOpenAIConfig
azure_config = AzureOpenAIConfig(
    model="o1-preview",
    reasoning_effort="medium",
)
assert azure_config.reasoning_effort == "medium"

# Test OpenAIConfig
openai_config = OpenAIConfig(
    model="o1-mini",
    reasoning_effort="low",
)
assert openai_config.reasoning_effort == "low"
```
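For context, the gating that the verification points above describe (reasoning_effort sent only for o1/o3/gpt-5, dropped for gpt-4 and other non-reasoning models) could look roughly like this; the helper name and prefix list below are assumptions for illustration, not the exact code in this PR:

```python
# Rough sketch of the gating idea; build_api_params and the prefix tuple are
# illustrative names, not the implementation added in this PR.
REASONING_MODEL_PREFIXES = ("o1", "o3", "gpt-5")

def build_api_params(config) -> dict:
    params = {"model": config.model}
    # reasoning_effort is only sent for reasoning models; non-reasoning models
    # such as gpt-4 would reject the parameter.
    if getattr(config, "reasoning_effort", None) and config.model.startswith(REASONING_MODEL_PREFIXES):
        params["reasoning_effort"] = config.reasoning_effort
    return params
```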
Hey @agam1092005, thanks for sending this PR! Could you update the title to [enhancement], and then I will start the discussion and the review. Could you also point me to this new change in OpenAIConfig, maybe a release note?
Hi @parshvadaftari
Thanks for reviewing! I've updated the PR title to include [enhancement].
Regarding the reasoning_effort parameter in the OpenAI SDK, here are the references:
OpenAI Documentation & Release Notes:
- Official API Reference: Chat Completions - reasoning_effort parameter
  - Supported values: "low", "medium", "high"
  - Available for reasoning models: o1, o3, gpt-5 series
- OpenAI Python SDK (v1.54.0+): the parameter was added to support reasoning models
- Azure OpenAI Documentation: Azure OpenAI reasoning models
  - Confirms Azure OpenAI also supports this parameter for reasoning models
Context:
The parameter controls the inference-time compute budget for reasoning models. Higher effort levels use more tokens and time but potentially provide better reasoning quality. This is particularly useful for evaluating performance/latency trade-offs.
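As an illustration (not part of this PR), one way to compare effort levels directly against the OpenAI Python SDK; the model name and prompt are placeholders:

```python
# Illustrative only: compare latency and token usage across effort levels
# with the OpenAI Python SDK (v1.54.0+). Model and prompt are placeholders.
import time
from openai import OpenAI

client = OpenAI()

for effort in ("low", "medium", "high"):
    start = time.perf_counter()
    response = client.chat.completions.create(
        model="o1",
        reasoning_effort=effort,
        messages=[{"role": "user", "content": "Explain consistent hashing briefly."}],
    )
    elapsed = time.perf_counter() - start
    print(f"{effort}: {elapsed:.1f}s, {response.usage.completion_tokens} completion tokens")
```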
Let me know if you need any additional information or changes! 🙂