
[Question]: Fail to access embedding model(bge-large-zh-v1.5).Connection error.

Open qifuxiao opened this issue 10 months ago • 3 comments

Describe your problem

I deployed RAGFlow successfully with docker compose and added models through Ollama; the test task worked fine. Now I want to use the Xinference framework for model management, but when I add a model I get the error: Fail to access embedding model(bge-large-zh-v1.5).Connection error.

Image

Image

I checked the ragflow-server logs:

NoneType: None
2025-02-28 10:04:43,961 INFO 14 172.19.0.6 - - [28/Feb/2025 10:04:43] "POST /v1/llm/add_llm HTTP/1.1" 200 -
2025-02-28 10:07:08,366 INFO 14 172.19.0.6 - - [28/Feb/2025 10:07:08] "GET /v1/user/info HTTP/1.1" 200 -
2025-02-28 10:07:08,374 INFO 14 172.19.0.6 - - [28/Feb/2025 10:07:08] "GET /v1/llm/my_llms HTTP/1.1" 200 -
2025-02-28 10:07:08,375 INFO 14 172.19.0.6 - - [28/Feb/2025 10:07:08] "GET /v1/user/tenant_info HTTP/1.1" 200 -
2025-02-28 10:07:08,397 INFO 14 172.19.0.6 - - [28/Feb/2025 10:07:08] "GET /v1/llm/factories HTTP/1.1" 200 -
2025-02-28 10:07:18,369 INFO 14 Retrying request to /embeddings in 0.867682 seconds
2025-02-28 10:07:19,239 INFO 14 Retrying request to /embeddings in 1.911701 seconds
2025-02-28 10:07:21,154 ERROR 14 Fail to access embedding model(bge-large-zh-v1.5).Connection error.
NoneType: None
2025-02-28 10:07:21,156 INFO 14 172.19.0.6 - - [28/Feb/2025 10:07:21] "POST /v1/llm/add_llm HTTP/1.1" 200 -
2025-02-28 10:08:03,862 INFO 14 172.19.0.6 - - [28/Feb/2025 10:08:03] "GET /v1/user/info HTTP/1.1" 200 -
2025-02-28 10:08:03,873 INFO 14 172.19.0.6 - - [28/Feb/2025 10:08:03] "GET /v1/llm/my_llms HTTP/1.1" 200 -
2025-02-28 10:08:03,874 INFO 14 172.19.0.6 - - [28/Feb/2025 10:08:03] "GET /v1/user/tenant_info HTTP/1.1" 200 -
2025-02-28 10:08:03,898 INFO 14 172.19.0.6 - - [28/Feb/2025 10:08:03] "GET /v1/llm/factories HTTP/1.1" 200 -
2025-02-28 10:08:49,277 INFO 14 172.19.0.6 - - [28/Feb/2025 10:08:49] "GET /v1/user/info HTTP/1.1" 200 -
2025-02-28 10:08:49,287 INFO 14 172.19.0.6 - - [28/Feb/2025 10:08:49] "GET /v1/user/tenant_info HTTP/1.1" 200 -
2025-02-28 10:08:49,289 INFO 14 172.19.0.6 - - [28/Feb/2025 10:08:49] "GET /v1/llm/my_llms HTTP/1.1" 200 -
2025-02-28 10:08:49,304 INFO 14 172.19.0.6 - - [28/Feb/2025 10:08:49] "GET /v1/llm/factories HTTP/1.1" 200 -
2025-02-28 10:08:51,718 INFO 14 Retrying request to /embeddings in 0.913683 seconds
2025-02-28 10:08:52,634 INFO 14 Retrying request to /embeddings in 1.858031 seconds
2025-02-28 10:08:54,495 ERROR 14 Fail to access embedding model(bge-large-zh-v1.5).Connection error.

No useful error information is given here.

qifuxiao avatar Feb 28 '25 02:02 qifuxiao

Give it a try: 172.17.0.1:9997/v1
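
Before re-adding the model in the UI, it may help to check from inside the RAGFlow container whether that address is reachable at all. A minimal sketch, assuming the default container name ragflow-server, Xinference on its default port 9997, and curl being available in the image (172.17.0.1 is usually the host as seen from Docker's default bridge network):

```bash
# Probe the OpenAI-compatible Xinference API from inside the RAGFlow container.
# /v1/models should list the launched models if the address and port are right.
docker exec -it ragflow-server curl -sv http://172.17.0.1:9997/v1/models
```

If this is refused or times out, the problem is plain network reachability (host firewall, or Xinference listening only on 127.0.0.1 on the host) rather than anything inside RAGFlow.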

KevinHuSh avatar Feb 28 '25 05:02 KevinHuSh

Give it a try: 172.17.0.1:9997/v1

I have a similar situation as well. How can I resolve it? It's also about Docker deployment.

ragflow-server log:

2025-03-01 00:48:19,573 INFO 29 Retrying request to /embeddings in 0.773104 seconds
2025-03-01 00:48:20,349 INFO 29 Retrying request to /embeddings in 1.601792 seconds
2025-03-01 00:48:21,955 ERROR 29 Fail to access embedding model(bge-large-zh-v1.5).Connection error.
NoneType: None
2025-03-01 00:48:21,957 INFO 29 172.18.0.6 - - [01/Mar/2025 00:48:21] "POST /v1/llm/add_llm HTTP/1.1" 200 -

Image

Image

syks0121 avatar Feb 28 '25 16:02 syks0121

Windows 10, Docker, local models.

  1. Follow the steps in the docs to install ragflow-slim

  2. Run local models with the desktop app of your choice

    • your model
    • your text-embedder
    • etc.
  3. Run RagFlow

    • open http://127.0.0.1/
    • Sign up / log in: create a new user
  4. RagFlow admin

    • go to /Model providers
  5. Select Models to be added

  6. Added Models > click Add LLM (repeat for chat, embedding, etc.)

    • model type : chat
    • model name : usually provided by your desktop app
    • base url : provided by your desktop app, e.g. http://127.0.0.1:[port number] (if RAGFlow runs in Docker, see the sketch after this list)
    • max tokens
  7. Click the System Model Settings button

    • select the models added in step 6
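
A note on the base URL in step 6 (referenced there): when RAGFlow itself runs in Docker Desktop, http://127.0.0.1:[port number] points at the RAGFlow container, not at the desktop app serving your models on the Windows host. A sketch of how to find a base URL that works, assuming an OpenAI-compatible local server and a hypothetical host port 1234 (use whatever port your desktop app reports):

```bash
# From the Windows host: confirm the desktop app is actually serving.
curl -s http://127.0.0.1:1234/v1/models

# From inside the RAGFlow container: 127.0.0.1 will not reach the host.
# On Docker Desktop, host.docker.internal resolves to the host machine,
# so this is usually the base URL to enter in RAGFlow instead.
docker exec -it ragflow-server curl -s http://host.docker.internal:1234/v1/models
```

If the second command returns the model list, use http://host.docker.internal:1234 (plus /v1 if the provider expects it) as the base URL in step 6.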

Edit: fixed with an embedder update: https://huggingface.co/kalle07/embedder_collection

Hint: error 102 shows up everywhere but is not useful! The instruction it gives is misleading and has no link to the relevant option in the GUI: "Please add both embedding model and LLM in Settings > Model providers firstly. Then, set them in 'System model settings'."

iopenet avatar Mar 10 '25 16:03 iopenet


I have a similar situation as well

GXKIM avatar Apr 10 '25 05:04 GXKIM

I have the same issue and the same console log in ragflow-server. Any suggestions for this issue?

127.0.0.1: Connection error.

Image

My local IP: Request time out.

Image

channingy avatar Apr 11 '25 07:04 channingy

Try IP: 172.17.0.1
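
172.17.0.1 is the usual gateway of Docker's default bridge network, i.e. an address of the host as seen from containers, but it can differ between setups. A sketch of how to check it on your machine, assuming a standard Docker install and the ragflow-server container name from the logs above:

```bash
# Gateway of Docker's default bridge network, as reported by Docker:
docker network inspect bridge -f '{{(index .IPAM.Config 0).Gateway}}'

# From inside the RAGFlow container (docker compose creates its own network,
# so the gateway shown here may be e.g. 172.19.0.1 instead); the "default via"
# line is a host-side address the container can reach, provided the ip tool
# exists in the image:
docker exec -it ragflow-server ip route
```

Whichever address you use, the embedding service on the host has to listen on 0.0.0.0 rather than only on 127.0.0.1, and the host firewall has to allow the port; otherwise the connection error will persist.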

KevinHuSh avatar Apr 11 '25 12:04 KevinHuSh

@channingy Is it solved?

luohao-svg avatar Apr 15 '25 08:04 luohao-svg

yes


GXKIM avatar Apr 15 '25 08:04 GXKIM

Can you tell me how you fixed it? I'm also having this issue now, and I can't add the model anymore after I modify the IP.

luohao-svg avatar Apr 15 '25 08:04 luohao-svg