When setting up the models, I was able to successfully configure the models from Silicon Flow (硅基流动) and Tencent Cloud. However, an error occurred during the chat process.
Is there an existing issue for the same bug?
- [x] I have checked the existing issues.
RAGFlow workspace code commit ID
None
RAGFlow image version
v0.16.0 slim
Other environment information
An Ubuntu 22.04 virtual machine
Actual behavior
When setting up the models, I was able to configure the models from Silicon Flow (硅基流动) and Tencent Cloud successfully, but during chat an error is reported: ERROR: LLM(deepseek-r1___OpenAI-API@OpenAI-) not found
Expected behavior
I hope to be able to call the API normally. A few days ago, before upgrading to v0.16.0, I was using v0.15.0 and everything worked fine, but it did not include the DeepSeek R1 model, so I upgraded; after the upgrade, the error above appeared. A self-hosted Ollama model works normally.
Steps to reproduce
Error shown during chat:
ERROR: LLM(deepseek-r1___OpenAI-API@OpenAI-) not found
Error in the logs:
Traceback (most recent call last):
  File "/ragflow/api/apps/conversation_app.py", line 230, in stream
    for ans in chat(dia, msg, True, **req):
  File "/ragflow/api/db/services/dialog_service.py", line 186, in chat
    raise LookupError("LLM(%s) not found" % dialog.llm_id)
LookupError: LLM(deepseek-r1___OpenAI-API@OpenAI-) not found
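The LookupError above is raised when the dialog's stored llm_id string fails an exact-match lookup against the registered models. A minimal sketch of that failure mode (the id format "<model>___<suffix>@<factory>" and all names here are assumptions for illustration, not RAGFlow's actual code) shows why a truncated factory suffix like "OpenAI-" would never match:

```python
# Hypothetical sketch, NOT RAGFlow's actual implementation: illustrates
# why an exact-match lookup on a stored llm_id fails when the factory
# part of the id is truncated.

def split_llm_id(llm_id: str):
    """Split an id of the assumed form '<model>@<factory>'."""
    model, _, factory = llm_id.rpartition("@")
    return model, factory

# Assumed registry of (model, factory) pairs added in model settings.
registered = {("deepseek-r1___OpenAI-API", "OpenAI-API-Compatible")}

def find_llm(llm_id: str) -> bool:
    # Exact match only: a truncated factory name misses the registry.
    return split_llm_id(llm_id) in registered

print(find_llm("deepseek-r1___OpenAI-API@OpenAI-"))                 # False
print(find_llm("deepseek-r1___OpenAI-API@OpenAI-API-Compatible"))   # True
```

Under this reading, the trailing "OpenAI-" (and "SILICONF" in the Silicon Flow case below) looks like a cut-off factory name, so the stored id never matches any registered model.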
Additional information
No response
Select the LLM successfully added in dialog setting.
LLM(deepseek-r1___OpenAI-API@OpenAI-) comes from neither Silicon Flow nor Tencent Cloud.
When using Silicon Flow's own interface, a similar error occurs during chat: ERROR: LLM(deepseek-ai/DeepSeek-R1@SILICONF) not found. However, this model does exist.