[Bug] [Module Name] The embedding model configuration is not working
Search before asking
- [x] I have searched the issues and found no similar issues.
Operating system information
Linux
Python version information
3.11
DB-GPT version
main
Related scenes
- [ ] Chat Data
- [ ] Chat Excel
- [ ] Chat DB
- [x] Chat Knowledge
- [ ] Model Management
- [ ] Dashboard
- [ ] Plugins
Installation Information
- [ ] AutoDL Image
- [ ] Other
Device information
Device: CPU
Models information
LLM: deepseek; Embedding model: text-embedding-v3
What happened
I first added the remote embedding model text-embedding-v3 in the model management interface and started it successfully. Then, in the embedding model configuration interface of the knowledge base, I changed the model to text-embedding-v3. However, the backend logs show that it is still calling the embedding model specified in the config file.
What you expected to happen
The expected result is that both vectorization processing and knowledge retrieval should use the embedding model configured in the interface.
How to reproduce
- Configure the embedding model as text2vec-large-chinese in the config file.
- Stop the text2vec-large-chinese model in the model management interface, then create a new remote text-embedding-v3 model.
- Create a new knowledge base and change the model in the embedding model configuration interface of the knowledge base to text-embedding-v3.
- Upload documents and start parsing. Upon observing the backend logs, it is found that the text2vec-large-chinese model is being called instead.
Additional context
No response
Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
```toml
[[models.embeddings]]
name = "BAAI/bge-large-zh-v1.5"
provider = "hf"
# If not provided, the model will be downloaded from the Hugging Face model hub
# Uncomment the following line to specify the model path in the local file system
# path = "the-model-path-in-the-local-file-system"
path = "/xxx/models/bge-large-zh-v1.5"
```
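The entry above covers a local Hugging Face model. For a remote model like text-embedding-v3 served through an OpenAI-compatible endpoint, the config-file entry would presumably use a proxy provider instead of `hf`. A minimal sketch, assuming a `proxy/openai` provider name with `api_url`/`api_key` fields; these field names are assumptions and should be checked against the reference config shipped with your DB-GPT version:

```toml
# Hypothetical remote embedding entry; the provider name and field
# names are assumptions and may differ between DB-GPT versions.
[[models.embeddings]]
name = "text-embedding-v3"
provider = "proxy/openai"
# Replace with your actual OpenAI-compatible embeddings endpoint
api_url = "https://your-endpoint/v1/embeddings"
# Read the key from an environment variable rather than hard-coding it
api_key = "${env:EMBEDDING_API_KEY}"
```

Even with such an entry, the reported problem remains that the model chosen in the knowledge base UI is ignored in favor of the config-file entry.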
I know about this configuration. What I mean is that I set a different model in the model settings field of the knowledge base configuration interface, and it does not take effect.
I started the latest version with `docker compose up -d`, and I have also tried configuring this field, but it had no effect.