
[Bug] [Module Name] The embedding model configuration is not working

Open bxfxf opened this issue 9 months ago • 3 comments

Search before asking

  • [x] I had searched in the issues and found no similar issues.

Operating system information

Linux

Python version information

>=3.11

DB-GPT version

main

Related scenes

  • [ ] Chat Data
  • [ ] Chat Excel
  • [ ] Chat DB
  • [x] Chat Knowledge
  • [ ] Model Management
  • [ ] Dashboard
  • [ ] Plugins

Installation Information

Device information

Device:CPU

Models information

LLM: deepseek; Embedding model: text-embedding-v3

What happened

I first added the remote embedding model text-embedding-v3 in the model management interface and successfully started it. Then, in the embedding model configuration interface of the knowledge base, I changed the model to text-embedding-v3. However, according to the backend logs, it is still calling the embedding model specified in the config file.

What you expected to happen

The expected result is that both vectorization processing and knowledge retrieval should use the embedding model configured in the interface.

How to reproduce

  1. Configure the embedding model as text2vec-large-chinese in the config file.
  2. Stop the text2vec-large-chinese model in the model management interface, then create a new remote text-embedding-v3 model.
  3. Create a new knowledge base and change the model in the embedding model configuration interface of the knowledge base to text-embedding-v3.
  4. Upload documents and start parsing. Upon observing the backend logs, it is found that the text2vec-large-chinese model is being called instead.
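One quick way to confirm which model actually handled the request is to pull the model name out of the backend log line. This is a minimal sketch; the log path and exact wording below are assumptions, not DB-GPT's actual log format, so adjust the pattern to match your logs:

```shell
# Hypothetical log line; real DB-GPT output will differ in wording.
log_line="INFO embedding request handled by model: text2vec-large-chinese"

# Extract whatever follows "model: " to see which embedding model served the call.
model=$(echo "$log_line" | sed -n 's/.*model: \(.*\)/\1/p')
echo "$model"
```

Running the same extraction over the live log (e.g. piping it through `grep -i embedding` first) makes it easy to see whether text-embedding-v3 or text2vec-large-chinese is being invoked during parsing.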

[Screenshot: backend log output]

Additional context

No response

Are you willing to submit PR?

  • [ ] Yes I am willing to submit a PR!

bxfxf avatar Mar 10 '25 02:03 bxfxf

[[models.embeddings]]
name = "BAAI/bge-large-zh-v1.5"
provider = "hf"
# If not provided, the model will be downloaded from the Hugging Face model hub
# uncomment the following line to specify the model path in the local file system
# path = "the-model-path-in-the-local-file-system"
path = "/xxx/models/bge-large-zh-v1.5"

Aries-ckt avatar Mar 11 '25 01:03 Aries-ckt
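For the remote case the reporter describes, the analogous entry would declare the proxied model in the same `[[models.embeddings]]` table. The `provider`, `api_url`, and `api_key` keys below are assumptions based on DB-GPT's proxy-model configuration style and may not match the exact schema of your version; treat this as a sketch, not a verified config:

[[models.embeddings]]
name = "text-embedding-v3"
# Assumed provider id for an OpenAI-compatible remote endpoint
provider = "proxy/openai"
api_url = "https://your-endpoint/v1/embeddings"
api_key = "your-api-key"

Whichever entry is first in the config is what the backend falls back to, which is consistent with the behavior reported in the logs.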

[[models.embeddings]]
name = "BAAI/bge-large-zh-v1.5"
provider = "hf"
# If not provided, the model will be downloaded from the Hugging Face model hub
# uncomment the following line to specify the model path in the local file system
# path = "the-model-path-in-the-local-file-system"
path = "/xxx/models/bge-large-zh-v1.5"

I already know about this configuration. What I mean is that I set a different model in the model settings field of the knowledge base configuration UI, and it has no effect.

bxfxf avatar Mar 11 '25 09:03 bxfxf

I started the latest version with `docker compose up -d`, and I have also tried configuring this field; it had no effect.

dailyer avatar Mar 21 '25 02:03 dailyer