Langchain-Chatchat
[BUG] When using the API for knowledge base Q&A, non-OpenAI online LLMs (qianfan-api and zhipu-api) raise an APIConnectionError.
Problem description

The error above occurs during knowledge base Q&A. It still worked last week; after rebuilding the environment this week it fails consistently, so the dependency versions may differ from the previous setup.

Core error log:

2024-04-24 12:44:01,234 - _base_client.py[line:1524] - INFO: Retrying request to /chat/completions in 0.957255 seconds
2024-04-24 12:44:02,192 - _base_client.py[line:1524] - INFO: Retrying request to /chat/completions in 1.747860 seconds
2024-04-24 12:44:03,943 - utils.py[line:38] - ERROR: Connection error

The failing code is at line 112 of /chatchat-space/server/chat/knowledge_base_chat.py:

    task = asyncio.create_task(wrap_done(
        chain.acall({"context": context, "question": query}),
        callback.done),
    )

Any help would be appreciated.
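For context, the quoted call wraps the chain coroutine in a background task and signals a callback event when it finishes. The following is a minimal sketch of what `wrap_done` in server/utils.py likely looks like, reconstructed from the traceback (`await fn` at line 36, the "Caught exception" log at line 40) — an approximation, not the project's exact source:

```python
import asyncio
import logging
from typing import Awaitable


async def wrap_done(fn: Awaitable, event: asyncio.Event):
    """Await the wrapped coroutine, log any exception instead of letting it
    escape the task, and always signal the event so callers can proceed."""
    try:
        await fn
    except Exception as e:
        # Matches the observed log line:
        # "ERROR: APIConnectionError: Caught exception: Connection error."
        logging.error(f"{type(e).__name__}: Caught exception: {e}")
    finally:
        event.set()
```

Because the exception is swallowed here, the API request still returns 200 OK (as seen in the uvicorn log) even though the LLM call failed.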
Full error log (keys redacted)

OS: Linux-5.15.0-92-generic-x86_64-with-glibc2.35.
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Project version: v0.2.10
langchain version: 0.0.354
fastchat version: 0.2.35
Current text splitter: ChineseRecursiveTextSplitter
Running LLM models: ['qianfan-api'] @ cpu
{'api_key': '*******************', 'device': 'auto', 'host': '0.0.0.0', 'infer_turbo': False, 'online_api': True, 'port': 21004, 'provider': 'QianFanWorker', 'secret_key': '*****************************', 'version': 'ERNIE-Bot', 'version_url': '', 'worker_class': <class 'server.model_workers.qianfan.QianFanWorker'>}
Current embeddings model: m3e-base @ cpu
==============================Langchain-Chatchat Configuration==============================
2024-04-24 12:43:53,612 - startup.py[line:655] - INFO: Starting services:
2024-04-24 12:43:53,612 - startup.py[line:656] - INFO: To view llm_api logs, go to /home/lxy/chatchat-space/logs
INFO: Started server process [17786]
INFO: Waiting for application startup.
INFO: Application startup complete.

Server runtime info:
    Chatchat API Server: http://127.0.0.1:7861
INFO: Uvicorn running on http://0.0.0.0:7861 (Press CTRL+C to quit)
INFO: 119.39.128.211:51496 - "POST /chat/knowledge_base_chat HTTP/1.1" 200 OK
/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The class langchain_community.chat_models.openai.ChatOpenAI was deprecated in langchain-community 0.0.10 and will be removed in 0.2.0. An updated version of the class exists in the langchain-openai package and should be used instead. To use it run pip install -U langchain-openai and import as from langchain_openai import ChatOpenAI.
warn_deprecated(
2024-04-24 12:44:00,395 - SentenceTransformer.py[line:66] - INFO: Load pretrained SentenceTransformer: /home/models/moka-ai/m3e-base
2024-04-24 12:44:01,178 - faiss_cache.py[line:92] - INFO: loading vector store in 'samples/vector_store/m3e-base' from disk.
2024-04-24 12:44:01,180 - loader.py[line:54] - INFO: Loading faiss with AVX2 support.
2024-04-24 12:44:01,192 - loader.py[line:56] - INFO: Successfully loaded faiss with AVX2 support.
/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/faiss.py:121: UserWarning: Normalizing L2 is not applicable for metric type: METRIC_INNER_PRODUCT
warnings.warn(
2024-04-24 12:44:01,234 - _base_client.py[line:1524] - INFO: Retrying request to /chat/completions in 0.957255 seconds
2024-04-24 12:44:02,192 - _base_client.py[line:1524] - INFO: Retrying request to /chat/completions in 1.747860 seconds
2024-04-24 12:44:03,943 - utils.py[line:38] - ERROR: Connection error.
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 67, in map_httpcore_exceptions
yield
File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 371, in handle_async_request
resp = await self._pool.handle_async_request(req)
File "/usr/local/lib/python3.10/dist-packages/httpcore/_async/connection_pool.py", line 216, in handle_async_request
raise exc from None
File "/usr/local/lib/python3.10/dist-packages/httpcore/_async/connection_pool.py", line 196, in handle_async_request
response = await connection.handle_async_request(
File "/usr/local/lib/python3.10/dist-packages/httpcore/_async/connection.py", line 99, in handle_async_request
raise exc
File "/usr/local/lib/python3.10/dist-packages/httpcore/_async/connection.py", line 76, in handle_async_request
stream = await self._connect(request)
File "/usr/local/lib/python3.10/dist-packages/httpcore/_async/connection.py", line 122, in _connect
stream = await self._network_backend.connect_tcp(**kwargs)
File "/usr/local/lib/python3.10/dist-packages/httpcore/_backends/auto.py", line 30, in connect_tcp
return await self._backend.connect_tcp(
File "/usr/local/lib/python3.10/dist-packages/httpcore/_backends/anyio.py", line 114, in connect_tcp
with map_exceptions(exc_map):
File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/usr/local/lib/python3.10/dist-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ConnectError: All connection attempts failed
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1435, in _request
    response = await self._client.send(
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1646, in send
    response = await self._send_handling_auth(
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1674, in _send_handling_auth
    response = await self._send_handling_redirects(
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1711, in _send_handling_redirects
    response = await self._send_single_request(request)
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1748, in _send_single_request
    response = await transport.handle_async_request(request)
  File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 370, in handle_async_request
    with map_httpcore_exceptions():
  File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 84, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError: All connection attempts failed
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/home/lxy/chatchat-space/server/utils.py", line 36, in wrap_done
    await fn
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 385, in acall
    raise e
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 379, in acall
    await self._acall(inputs, run_manager=run_manager)
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py", line 275, in _acall
    response = await self.agenerate([inputs], run_manager=run_manager)
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py", line 142, in agenerate
    return await self.llm.agenerate_prompt(
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py", line 554, in agenerate_prompt
    return await self.agenerate(
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py", line 514, in agenerate
    raise exceptions[0]
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py", line 617, in _agenerate_with_cache
    return await self._agenerate(
  File "/usr/local/lib/python3.10/dist-packages/langchain_community/chat_models/openai.py", line 522, in _agenerate
    return await agenerate_from_stream(stream_iter)
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py", line 87, in agenerate_from_stream
    async for chunk in stream:
  File "/usr/local/lib/python3.10/dist-packages/langchain_community/chat_models/openai.py", line 488, in _astream
    async for chunk in await acompletion_with_retry(
  File "/usr/local/lib/python3.10/dist-packages/langchain_community/chat_models/openai.py", line 105, in acompletion_with_retry
    return await llm.async_client.create(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/openai/resources/chat/completions.py", line 1300, in create
    return await self._post(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1703, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1406, in request
    return await self._request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1459, in _request
    return await self._retry_request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1528, in _retry_request
    return await self._request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1459, in _request
    return await self._retry_request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1528, in _retry_request
    return await self._request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1469, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
2024-04-24 12:44:03,944 - utils.py[line:40] - ERROR: APIConnectionError: Caught exception: Connection error.
The APIConnectionError you're encountering with the qianfan-api and zhipu-api could be due to several factors. Here's a focused approach to troubleshoot and potentially resolve the issue:
- API key and secret key verification: Double-check that the API key and secret key are correct. Incorrect credentials can prevent successful API requests.
- Network connectivity: Ensure the server has a stable connection to https://aip.baidubce.com. Connection issues can lead to the errors you're seeing.
- Baidu service status: Check for any service disruptions or maintenance on Baidu's end that might affect API availability.
- Dependencies update: Make sure all dependencies, especially httpx, are up to date to avoid compatibility issues.
- Enhanced error handling: Improve error handling and logging around the API call sites to capture more detailed error information.
- Rate limits: Consider whether you're hitting API rate limits, and implement retry mechanisms with exponential backoff if necessary.
- API support: If the issue persists, reaching out to the support teams for qianfan-api and zhipu-api could provide more insights.
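The network-connectivity point can be checked without the full stack. A minimal stdlib sketch (the host and port below are examples; Baidu's Qianfan endpoint is assumed to be reachable on HTTPS port 443):

```python
import socket


def can_connect(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Example: probe the endpoint the Qianfan worker needs to reach.
# print(can_connect("aip.baidubce.com"))
```

If this returns False on the server but True from your workstation, the problem is a firewall or missing proxy configuration on the server, not the application code.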
Additionally, review your server_config.py for configurations like HTTPX_DEFAULT_TIMEOUT, DEFAULT_BIND_HOST, and the API_SERVER setup to ensure they align with your network and service requirements. Adjusting these settings or ensuring they're correctly configured might help resolve the connection issues you're experiencing.
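For the backoff suggestion above, here is a hedged sketch of retrying a flaky async call with exponential backoff and jitter, roughly mirroring what the openai client does internally (the "Retrying request to /chat/completions" lines in the log). It catches the builtin ConnectionError as a stand-in; in practice you would catch the transport errors your client actually raises (e.g. httpx.ConnectError):

```python
import asyncio
import random


async def with_retries(make_call, max_retries: int = 2, base_delay: float = 0.5):
    """Call an async factory, retrying on connection errors.

    make_call: zero-argument callable returning a fresh coroutine per attempt.
    """
    for attempt in range(max_retries + 1):
        try:
            return await make_call()
        except ConnectionError:
            if attempt == max_retries:
                raise
            # Exponential backoff with jitter, similar in spirit to the
            # observed 0.96s and 1.75s retry delays.
            delay = base_delay * (2 ** attempt) * (0.5 + random.random())
            await asyncio.sleep(delay)
```

Note that retries only help with transient failures; they will not fix a persistent routing or proxy problem like the one in this report.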
Sources
- When using Qianfan's online model, LLM chat works fine, but knowledge base Q&A with the same Qianfan model reports an API communication error — what causes this?
- The service starts, but cannot be used normally
- [BUG] Question about proxy settings when calling online models such as zhipu and qianfan
- [BUG] Why does using the Zhipu API report APIConnectionError: Error communicating with OpenAI
- server/model_workers/qianfan.py
- webui_pages/utils.py
- requirements_api.txt
- configs/server_config.py.example
- requirements.txt
About Dosu: This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
You've probably exceeded the token limit; the default cap is 4096.