[Bug]: Failed to add the siliconflow model in version 0.22.1
Self Checks
- [x] I have searched for existing issues, including closed ones.
- [x] I confirm that I am using English to submit this report (Language Policy).
- [x] Non-English title submissions will be closed directly (Language Policy).
- [x] Please do not modify this template :) and fill in all the required fields.
RAGFlow workspace code commit ID
1
RAGFlow image version
v0.22.1
Other environment information
Actual behavior
Failed to add the siliconflow model in version 0.22.1
Expected behavior
No response
Steps to reproduce
1. Locate "siliconflow".
2. Click the "Add" button.
3. Enter the API key and click "Save"; an error occurs.
Additional information
No response
RAGFlow validates API keys for SiliconFlow and similar providers by making a real API call to the provider with the key you enter. If the provider returns a 401 Unauthorized, RAGFlow reports the key as invalid. For SiliconFlow, the API key is sent as a Bearer token in the Authorization header, with no special handling; the UI field expects a plain string. This means any mistyped, expired, or misconfigured key will fail validation, and the same error will appear for every model tested with that key (source, source).
Here’s what to check:
- Make sure you're copying the API key exactly as issued by SiliconFlow, with no extra spaces or quotes.
- Confirm the key is still active and hasn't been revoked or expired.
- If you're using docker-compose or environment variables, ensure there are no extra quotes or formatting issues (reference).
- Double-check network connectivity from your RAGFlow server to the SiliconFlow API endpoint; firewalls, proxies, or SSL issues can also cause authentication to fail (reference). A quick standalone request, sketched after this list, can help rule these out.
- After updating the API key or model config, restart the backend to ensure changes take effect (reference).
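To rule out RAGFlow itself, you can test the key directly from the RAGFlow host. The sketch below is a minimal check, assuming SiliconFlow's OpenAI-compatible API and the `.cn` base URL; adjust both if your account uses a different endpoint.

```python
# Minimal sanity check for a SiliconFlow API key, independent of RAGFlow.
# Assumptions: the .cn base URL and the OpenAI-compatible /models endpoint;
# adjust if your key was issued for a different domain.
import requests

API_KEY = "sk-..."  # paste the key exactly as issued, no quotes or spaces
BASE_URL = "https://api.siliconflow.cn/v1"  # assumption: .cn endpoint

resp = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
print(resp.status_code)   # 200 means the key and network path are fine
print(resp.text[:300])    # 401 means the key itself is being rejected
```

If this request succeeds but RAGFlow still rejects the key, the problem is on the RAGFlow side rather than with the key or the network.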
There’s no known bug in v0.22.1 that would cause this if the API key and network are correct. If you’re still stuck, sharing how you obtained the API key and any relevant backend logs could help pinpoint the issue.
@yingfeng @KevinHuSh It's resolved. You should specify that the .cn key must be used; the .com key cannot be used.
@dosubot
{"code":100,"data":null,"message":"UnboundLocalError(\"local variable 'mode' referenced before assignment\")"}
I want to replace the old version's built-in BAAI embedding model with SiliconFlow's BGE, but replacing the embedding model failed.
The error happens because the backend code for the embedding check expects to process at least one valid document chunk with a stored vector. If your knowledge base has no such chunks, the variable `mode` is never set, and the API crashes with an UnboundLocalError (see the "/check_embedding" implementation).
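For clarity, here is a sketch of the failure pattern that error message points to. It is illustrative only, not the actual RAGFlow source: a variable is assigned only inside a loop over chunks, so with zero valid chunks it is never bound.

```python
# Illustration only; not the actual /check_embedding code.
def check_embedding(chunks):
    for chunk in chunks:
        if chunk.get("vector"):          # `mode` is only set for embedded chunks
            mode = "ok"
    return mode                          # UnboundLocalError when `chunks` is empty


def check_embedding_fixed(chunks):
    mode = None                          # defensive default avoids the crash
    for chunk in chunks:
        if chunk.get("vector"):
            mode = "ok"
    if mode is None:
        raise ValueError("knowledge base has no embedded chunks to sample")
    return mode
```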
To work around this, make sure your knowledge base contains documents that have been embedded (i.e., have valid vectors stored). Try re-embedding or adding new documents, then run the embedding model check again. This should avoid the bug and let you validate SiliconFlow's BGE as a replacement.
This is a backend bug—ideally, the code should handle empty samples gracefully. If you need help with the SiliconFlow BGE config or run into other errors, let me know!
@Dosubot
The error means RAGFlow can't find the model 'BAAI/bge-large-zh-v1.5@SILICONFLOW' because it's not properly registered or allowed. Recent updates filter out 'BAAI' and 'Builtin' factories, so even if the model is in your conf/llm_factories.json, the backend may ignore it and raise this error when you try to use it.
To resolve this, check if 'BAAI/bge-large-zh-v1.5' is listed under the 'SILICONFLOW' factory in conf/llm_factories.json with model_type 'embedding'. If it is, make sure you restart the backend service after editing the config so the changes take effect. If the error persists, it's likely due to recent backend changes that exclude 'BAAI' models from being recognized, so you may need to use a different supported model or provider instead. For more details, see this source and this discussion.
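If you want to verify the config directly, a small script like the one below can check for the entry. The key names ("factory_llm_infos", "llm", "llm_name", "model_type") are assumptions based on the usual layout of that file; adjust them if your version's schema differs.

```python
# Check whether the SiliconFlow embedding model is declared in the factory
# config. Assumption: conf/llm_factories.json has a top-level
# "factory_llm_infos" list whose entries carry a "name" and an "llm" list.
import json

with open("conf/llm_factories.json", encoding="utf-8") as f:
    factories = json.load(f).get("factory_llm_infos", [])

target = next((fa for fa in factories if fa.get("name") == "SILICONFLOW"), None)
if target is None:
    print("SILICONFLOW factory not found in conf/llm_factories.json")
else:
    hits = [
        m for m in target.get("llm", [])
        if m.get("llm_name") == "BAAI/bge-large-zh-v1.5"
        and m.get("model_type") == "embedding"
    ]
    print("entry found" if hits else "model missing or wrong model_type")
```

Remember to restart the backend after any edit to this file, since it is only read at startup.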
@yingfeng @KevinHuSh I have a large amount of data that all uses the BAAI embedding model. The system has now been upgraded to version 0.22.1, but I cannot switch the embedding model, and this is urgent.
This article is the guide for upgrading to 0.22.1 when you have historical embedding data.
@yingfeng The problem I encountered is not covered in that article.
I followed the recommendations and tried to replace the old version's built-in BAAI/bge-large-zh-v1.5 with the one from SiliconFlow, but the replacement has failed so far.