Langchain-Chatchat
After enabling a reranker model, requests fail with "API communication encountered an error: [WinError 10054] An existing connection was forcibly closed by the remote host"
Error output in the terminal:
2024-02-27 10:22:26,723 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:63229 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-02-27 10:22:26,723 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-02-27 10:22:27,256 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:63229 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-02-27 10:22:27,257 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:63229 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-02-27 10:22:27,260 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
2024-02-27 10:22:30,995 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:63239 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-02-27 10:22:30,997 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-02-27 10:22:31,447 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:63239 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-02-27 10:22:31,450 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:63239 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-02-27 10:22:31,463 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:63239 - "GET /knowledge_base/list_knowledge_bases HTTP/1.1" 200 OK
2024-02-27 10:22:31,483 - _client.py[line:1027] - INFO: HTTP Request: GET http://127.0.0.1:7861/knowledge_base/list_knowledge_bases "HTTP/1.1 200 OK"
2024-02-27 10:22:37,525 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:63245 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-02-27 10:22:37,528 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-02-27 10:22:37,971 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:63245 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-02-27 10:22:37,974 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:63245 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-02-27 10:22:37,985 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:63245 - "GET /knowledge_base/list_knowledge_bases HTTP/1.1" 200 OK
2024-02-27 10:22:37,993 - _client.py[line:1027] - INFO: HTTP Request: GET http://127.0.0.1:7861/knowledge_base/list_knowledge_bases "HTTP/1.1 200 OK"
INFO: 127.0.0.1:63245 - "POST /chat/knowledge_base_chat HTTP/1.1" 200 OK
2024-02-27 10:22:38,066 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/knowledge_base_chat "HTTP/1.1 200 OK"
2024-02-27 10:22:40 | INFO | httpx | HTTP Request: POST https://ai.ludp.lenovo.com/apps/projects/115/deshun/llm/api/v1/engines/bge-large-en-v1.5/embeddings "HTTP/1.1 200 OK"
2024-02-27 10:22:40 | INFO | root | loading vector store in 'classifier_print/vector_store/ludpai-api-bge-large-en-v1.5' from disk.
2024-02-27 10:22:40 | INFO | faiss.loader | Loading faiss with AVX2 support.
2024-02-27 10:22:40 | INFO | faiss.loader | Could not load library with AVX2 support due to: ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2024-02-27 10:22:40 | INFO | faiss.loader | Loading faiss.
2024-02-27 10:22:41 | INFO | faiss.loader | Successfully loaded faiss.
-----------------model path------------------ D:/AIGC-SCI/Code/langchain-chatchat-v2/configs/reranker/bge-reranker-large
2024-02-27 10:22:49,125 - utils.py[line:197] - ERROR: ReadError: API通信遇到错误:[WinError 10054] An existing connection was forcibly closed by the remote host
Environment Information
- langchain-ChatGLM version / commit: 0.2.10
- Deployed with Docker: no
- LLM model: baichuan13B
- Embedding model (moka-ai/m3e-base etc.): m3e-base
- Vector store (faiss / milvus / pg_vector etc.): faiss
- Python version: 3.11
🤖
Hi @Miracle-Xu! Great to see you again. Hope you're having a good day!
Based on the error log you provided, the failure most likely occurs while the reranker model is being loaded: the connection is forcibly closed by the remote host while the model (or some of its files) is being fetched remotely. That points to a network problem or an issue on the remote host's side.
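If a remote fetch during model loading is indeed the culprit, one common mitigation (an assumption here, not something the log confirms) is to force the Hugging Face libraries into offline mode so that every file must come from the local cache or a local path, and any attempted download fails fast instead of hanging on a broken connection:

```shell
# Force huggingface_hub / transformers to use only locally available files.
# (POSIX shell shown; on Windows cmd use `set HF_HUB_OFFLINE=1` etc. instead.)
export HF_HUB_OFFLINE=1
export TRANSFORMERS_OFFLINE=1
```

Start the Langchain-Chatchat services from the same shell afterwards so the variables are inherited by the Python processes.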
In the Langchain-Chatchat issue tracker I found some similar, already-resolved issues that may help (see Sources below).
The fixes for those issues mainly boil down to:
- Make sure your PyTorch build matches your CUDA version.
- Make sure all required services are running and their addresses and ports are configured correctly.
- Update to the latest code and retry.
- Load EMBEDDING_MODEL from a local path so no internet connection is needed; modify the HuggingFaceEmbeddings code to support fully offline use.
- In model_config, change the embedding model path to a local path, e.g. update the embedding_model_dict entry for "text2vec" to a local location such as "text2vec": "/opt/*****/text2vec-large-chinese". Update the LLM model path to a local path in the same way.
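As a concrete illustration of the last two points, here is a hypothetical sketch of what local-path entries might look like. The MODEL_ROOT directory and the exact dictionary names are assumptions for illustration; match them against the real keys in your own configs/model_config.py:

```python
# Hypothetical sketch: point every model at a local directory so nothing is
# fetched over the network at load time. Paths and dict names are assumed,
# not copied from the project's actual config.
MODEL_ROOT = "D:/models"  # assumed local directory holding downloaded models

embedding_model_dict = {
    # An absolute local path instead of a Hugging Face repo id.
    "m3e-base": f"{MODEL_ROOT}/m3e-base",
    "text2vec": f"{MODEL_ROOT}/text2vec-large-chinese",
}

reranker_model_dict = {
    "bge-reranker-large": f"{MODEL_ROOT}/bge-reranker-large",
}
```

With every entry resolving to a directory that already contains the model files, the loader never needs to reach a remote host.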
If none of the above resolves your problem, or you need more help, please share more error details or logs so we can assist you better.
Sources
- Could the model configuration be the problem?
- [BUG] After starting, chat does not respond and api.py reports APIConnectionError; many people hit the same problem and no one seems to have found a fix
- The remote host forcibly closed an existing connection
- server/reranker/reranker.py
- server/llm_api.py
About Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
Did you actually download the model locally?
Yes.
"[WinError 10054] An existing connection was forcibly closed by the remote host" is a firewall issue; check whether the relevant ports are reachable. This is not a problem in the project code.
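A quick way to check port reachability is a plain TCP connection test. The sketch below probes the two local ports that appear in the request log above (7861 for the API server, 20001 for the model worker endpoint); adjust the host/port list for your own deployment:

```python
import socket


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or blocked by a firewall.
        return False


# Ports taken from the request log above.
for port in (7861, 20001):
    status = "reachable" if port_open("127.0.0.1", port) else "closed or blocked"
    print(f"127.0.0.1:{port} is {status}")
```

If a port reports closed while the service claims to be running, look at Windows Defender Firewall rules (or any corporate proxy) before digging into the application code.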