
Add Chinese LLM Providers

mastwet opened this issue 1 month ago • 3 comments

Update Log

New Features

Expanded OpenAI-Compatible Model Provider Support

We're excited to announce that the project now supports 7 new OpenAI-compatible model providers:

  1. SiliconFlow (硅基流动)

    • API Endpoint: https://api.siliconflow.cn/v1
    • Provider in config file: siliconflow_llm
  2. Aliyun Bailian (阿里云百炼)

    • API Endpoint: https://dashscope.aliyuncs.com/compatible-mode/v1
    • Provider in config file: aliyun_bailian_llm
  3. Moonshot (月之暗面)

    • API Endpoint: https://api.moonshot.cn/v1
    • Provider in config file: moonshot_llm
  4. OpenRouter

    • API Endpoint: https://openrouter.ai/api/v1
    • Provider in config file: openrouter_llm
  5. Minimax

    • API Endpoint: https://api.minimax.chat/v1
    • Provider in config file: minimax_llm
  6. StepFun (阶跃星辰)

    • API Endpoint: https://api.stepfun.com/v1
    • Provider in config file: steepspeed_llm
  7. VolcEngine (火山引擎)

    • API Endpoint: https://ark.cn-beijing.volces.com/api/v3
    • Provider in config file: volcengine_llm

All these providers have been added to the configuration template. Users can simply set the corresponding API key and model name in their config file to use them.

Improvements

Configuration Management

  • Updated the .gitignore file with rules ignoring chat history, AI assistant work files, etc.

  • Improved the configuration template files with configuration examples for all newly added providers.

Code Quality

  • Adhered to the project's modular design principles, reusing the existing OpenAI-compatible implementation (see the sketch below)

  • Maintained code consistency and maintainability
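
Because every one of these services speaks the OpenAI chat-completions protocol, one client implementation can serve all of them; a provider entry only has to supply a base URL and credentials. Here is a minimal Python sketch of that reuse pattern; the names OpenAICompatibleLLM and create_llm are illustrative, not the project's actual classes or functions.

# Minimal sketch of the reuse pattern; names are illustrative,
# not the project's actual API.
from openai import AsyncOpenAI

class OpenAICompatibleLLM:
    """One client implementation shared by every OpenAI-compatible provider."""

    def __init__(self, base_url: str, api_key: str, model: str, temperature: float = 1.0):
        self.client = AsyncOpenAI(base_url=base_url, api_key=api_key)
        self.model = model
        self.temperature = temperature

    async def chat(self, messages: list[dict]) -> str:
        # Same request shape regardless of which provider sits behind base_url.
        resp = await self.client.chat.completions.create(
            model=self.model,
            messages=messages,
            temperature=self.temperature,
        )
        return resp.choices[0].message.content

# Each new provider is just a name-to-endpoint mapping; the client code is shared.
BASE_URLS = {
    "siliconflow_llm": "https://api.siliconflow.cn/v1",
    "aliyun_bailian_llm": "https://dashscope.aliyuncs.com/compatible-mode/v1",
    "moonshot_llm": "https://api.moonshot.cn/v1",
    "openrouter_llm": "https://openrouter.ai/api/v1",
    "minimax_llm": "https://api.minimax.chat/v1",
    "steepspeed_llm": "https://api.stepfun.com/v1",
    "volcengine_llm": "https://ark.cn-beijing.volces.com/api/v3",
}

def create_llm(provider: str, **cfg) -> OpenAICompatibleLLM:
    return OpenAICompatibleLLM(base_url=BASE_URLS[provider], **cfg)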

Usage Instructions

To use these new model providers, configure your conf.yaml file accordingly. For example, to use SiliconFlow:

agent_config:
  agent_settings:
    basic_memory_agent:
      llm_provider: 'siliconflow_llm'  # which entry from llm_configs to use

llm_configs:
  siliconflow_llm:
    llm_api_key: 'your-api-key-here'    # key issued by the provider
    model: 'deepseek-ai/DeepSeek-V3.1'  # any model the provider offers
    temperature: 1.0

Specific configuration examples for each provider are available in the configuration template.
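
Switching to any other provider from the list follows the same two-step pattern: point llm_provider at the new entry and fill in that entry under llm_configs. As an illustrative sketch, a Moonshot setup could look like the following (the model value is a placeholder, not a recommendation):

agent_config:
  agent_settings:
    basic_memory_agent:
      llm_provider: 'moonshot_llm'

llm_configs:
  moonshot_llm:
    llm_api_key: 'your-api-key-here'
    model: 'your-model-name'  # placeholder; use any model your Moonshot account offers
    temperature: 1.0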

Test Results:

I only screenshotted some of my test results, but trust me, every provider has been tested. This is not a complex modification, so it does not affect the original stack.

1. SiliconFlow: OK
2. Aliyun Bailian: OK
3. Moonshot: OK
4. OpenRouter: OK
5. Minimax: OK
6. StepFun: OK
7. VolcEngine: OK

mastwet · Oct 28 '25 06:10

Summary of Changes

Hello @mastwet, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the project's flexibility by integrating several new OpenAI-compatible Large Language Model (LLM) providers, with a particular focus on Chinese providers. The changes allow users to easily configure and switch between a broader selection of LLMs, supported by updated configuration templates and a streamlined integration approach that leverages existing code architecture.

Highlights

  • Expanded LLM Provider Support: Added support for 7 new OpenAI-compatible Large Language Model (LLM) providers: SiliconFlow, Aliyun Bailian, Moonshot, OpenRouter, Minimax, StepFun, and VolcEngine. This significantly increases the range of available LLM backends.
  • Configuration Updates: The default configuration templates (conf.default.yaml and conf.ZH.default.yaml) have been updated to include examples for all newly added LLM providers. The .gitignore file was also updated to ignore chat history and AI assistant work files.
  • Codebase Integration: The new providers are integrated by reusing the existing OpenAI-compatible implementation, ensuring modularity and maintaining code consistency. This involves updates to the LLM factory and configuration management modules.

gemini-code-assist[bot] · Oct 28 '25 06:10

Damn, I just realized I wrote the update log wrong. Awkward, but it's no big deal (

mastwet · Oct 28 '25 07:10

Actually, since these are all OpenAI-compatible, there's no need to add more complexity to conf.yaml. I think we could consider hooking up base-url presets directly in version 2.0.0, though if I say it here you may not follow what I mean. I was originally going to create the 2.0.0 branch in the main repo once Tim merged my README changes in https://github.com/Open-LLM-VTuber/Open-LLM-VTuber/pull/298, but I'll just start the migration now :) Once you look at the 2.0.0 design you should understand (though documentation is a real headache T_T)

ylxmf2005 · Nov 15 '25 10:11
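
To illustrate the preset idea ylxmf2005 describes: a single generic OpenAI-compatible entry could replace the seven per-provider blocks, with each named preset resolving to a base URL in code. A hypothetical sketch (the preset key and its values are invented for illustration, not the actual 2.0.0 schema):

llm_configs:
  openai_compatible_llm:
    preset: 'siliconflow'  # hypothetical name, resolved in code to https://api.siliconflow.cn/v1
    llm_api_key: 'your-api-key-here'
    model: 'deepseek-ai/DeepSeek-V3.1'
    temperature: 1.0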