
feat: Add SiliconFlow LLM support

Open vedant381 opened this issue 2 months ago • 2 comments

This commit introduces support for SiliconFlow as a new LLM provider. This allows users to leverage SiliconFlow's models within the mem0 ecosystem, expanding the range of available language models.

The key changes in this commit are:

  • Addition of mem0/llms/siliconflow.py, which contains the core implementation for interacting with the SiliconFlow API.
  • Creation of mem0/configs/llms/siliconflow.py to provide default configurations for SiliconFlow models.
  • Implementation of tests/llms/test_siliconflow.py to ensure the reliability and correctness of the new integration.
  • Introduction of mem0/openai_error_codes.py, which defines specific error codes for the SiliconFlow integration, improving error handling and diagnostics. This can be further extended to other places where OpenAI client calls are made.
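For context, SiliconFlow exposes an OpenAI-compatible chat-completions API, so the new provider class most likely follows the pattern of mem0's other OpenAI-style providers. The sketch below is a hypothetical, minimal version of what mem0/llms/siliconflow.py could contain; the class name, default model, base URL, and environment variable are assumptions rather than the PR's actual code:

```python
# Hypothetical sketch of a SiliconFlow provider for mem0. All names here
# (class name, default model, base URL, env var) are assumptions, not the
# implementation from this PR.
import os


class SiliconFlowLLM:
    def __init__(self, model="Qwen/Qwen2.5-7B-Instruct", api_key=None,
                 base_url="https://api.siliconflow.cn/v1"):
        self.model = model
        self.api_key = api_key or os.environ.get("SILICONFLOW_API_KEY")
        self.base_url = base_url

    def _build_params(self, messages, tools=None):
        # Assemble the request payload; tools are attached only when the
        # caller supplies them, since not every model supports tool calling.
        params = {"model": self.model, "messages": messages}
        if tools:
            params["tools"] = tools
            params["tool_choice"] = "auto"
        return params


llm = SiliconFlowLLM(api_key="sk-test")
params = llm._build_params([{"role": "user", "content": "hi"}])
print(sorted(params))  # no 'tools' key when none are passed
```

Keeping payload assembly in a helper like the hypothetical `_build_params` makes the tool-calling path easy to unit-test without network calls, which lines up with the PR shipping tests/llms/test_siliconflow.py alongside the provider.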

Fixes #3576


  • [x] Bug fix (non-breaking change which fixes an issue)
  • [x] New feature (non-breaking change which adds functionality)
  • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • [ ] Refactor (does not change functionality, e.g. code style improvements, linting)
  • [ ] Documentation update

How Has This Been Tested?

Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce them, and list any relevant details of your test configuration.


  • [x] Unit Test
  • [ ] Test Script (please provide)

Checklist:

  • [ ] My code follows the style guidelines of this project
  • [x] I have performed a self-review of my own code
  • [x] I have commented my code, particularly in hard-to-understand areas
  • [ ] I have made corresponding changes to the documentation
  • [ ] My changes generate no new warnings
  • [ ] I have added tests that prove my fix is effective or that my feature works
  • [ ] New and existing unit tests pass locally with my changes
  • [ ] Any dependent changes have been merged and published in downstream modules
  • [x] I have checked my code and corrected any misspellings

Maintainer Checklist

  • [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
  • [ ] Made sure Checks passed

vedant381 · Oct 14 '25 21:10

Hey @vedant381 Thanks for this integration and really appreciate your effort. Currently we don't have SiliconFlow on the roadmap.

parshvadaftari · Oct 15 '25 12:10

> Hey @vedant381 Thanks for this integration and really appreciate your effort. Currently we don't have SiliconFlow on the roadmap.

No worries, and thanks again for the response! I really enjoyed working on the integration — I’ll keep it available on my fork in case it’s useful in the future.

I also wanted to point out a potential issue: for LLM providers such as DeepseekLLM, LMStudioLLM, OpenAIStructuredLLM, and others that use the OpenAI client, the response generation methods currently don’t include error handling for models that don’t support tool calling. This can be a valid scenario for certain models — for example, the tencent/Hunyuan-MT-7B model I integrated in SiliconFlowLLM doesn’t support tool calling, which surfaced during testing.

It might be worth reviewing whether this is an actual issue or expected behavior.

vedant381 · Oct 15 '25 15:10