Langchain-Chatchat
LLM chat works fine, but knowledge-base chat fails with: peer closed connection without sending complete message body (incomplete chunked read)
LLM chat works fine; knowledge-base chat raises the error below:
```
2024-03-15 09:31:42,966 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:41506 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-03-15 09:31:42,967 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-03-15 09:31:43,090 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:41506 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-03-15 09:31:43,091 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:41506 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-03-15 09:31:43,094 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
2024-03-15 09:31:48,448 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:52666 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-03-15 09:31:48,449 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-03-15 09:31:48,474 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:52666 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-03-15 09:31:48,476 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:52666 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-03-15 09:31:48,478 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:52666 - "POST /chat/chat HTTP/1.1" 200 OK
2024-03-15 09:31:48,529 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/chat "HTTP/1.1 200 OK"
2024-03-15 09:31:48 | INFO | stdout | INFO:     127.0.0.1:36524 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2024-03-15 09:31:48,555 - _client.py[line:1758] - INFO: HTTP Request: POST http://127.0.0.1:20000/v1/chat/completions "HTTP/1.1 200 OK"
2024-03-15 09:31:48 | INFO | httpx | HTTP Request: POST http://127.0.0.1:20002/worker_generate_stream "HTTP/1.1 200 OK"
2024-03-15 09:31:57,468 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:52680 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-03-15 09:31:57,468 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-03-15 09:31:57,487 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:52680 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-03-15 09:31:57,488 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:52680 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-03-15 09:31:57,490 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:52680 - "GET /knowledge_base/list_knowledge_bases HTTP/1.1" 200 OK
2024-03-15 09:31:57,493 - _client.py[line:1027] - INFO: HTTP Request: GET http://127.0.0.1:7861/knowledge_base/list_knowledge_bases "HTTP/1.1 200 OK"
2024-03-15 09:32:01,124 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:51750 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-03-15 09:32:01,125 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-03-15 09:32:01,144 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:51750 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-03-15 09:32:01,145 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:51750 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-03-15 09:32:01,148 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:51750 - "GET /knowledge_base/list_knowledge_bases HTTP/1.1" 200 OK
2024-03-15 09:32:01,152 - _client.py[line:1027] - INFO: HTTP Request: GET http://127.0.0.1:7861/knowledge_base/list_knowledge_bases "HTTP/1.1 200 OK"
INFO:     127.0.0.1:51750 - "POST /chat/knowledge_base_chat HTTP/1.1" 200 OK
2024-03-15 09:32:01,251 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/knowledge_base_chat "HTTP/1.1 200 OK"
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/sse_starlette/sse.py", line 269, in __call__
    await wrap(partial(self.listen_for_disconnect, receive))
  File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/sse_starlette/sse.py", line 258, in wrap
    await func()
  File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/sse_starlette/sse.py", line 215, in listen_for_disconnect
    message = await receive()
  File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 587, in receive
    await self.message_event.wait()
  File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/asyncio/locks.py", line 213, in wait
    await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7f17d9a8c850

During handling of the above exception, another exception occurred:

  + Exception Group Traceback (most recent call last):
  |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
  |     result = await app(  # type: ignore[func-returns-value]
  |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
  |     return await self.app(scope, receive, send)
  |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
  |     await super().__call__(scope, receive, send)
  |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/applications.py", line 119, in __call__
  |     await self.middleware_stack(scope, receive, send)
  |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
  |     raise exc
  |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
  |     await self.app(scope, receive, _send)
  |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
  |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
  |     raise exc
  |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
  |     await app(scope, receive, sender)
  |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/routing.py", line 762, in __call__
  |     await self.middleware_stack(scope, receive, send)
  |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/routing.py", line 782, in app
  |     await route.handle(scope, receive, send)
  |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle
  |     await self.app(scope, receive, send)
  |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
  |     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
  |     raise exc
  |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
  |     await app(scope, receive, sender)
  |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/routing.py", line 75, in app
  |     await response(scope, receive, send)
  |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/sse_starlette/sse.py", line 255, in __call__
  |     async with anyio.create_task_group() as task_group:
  |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 678, in __aexit__
  |     raise BaseExceptionGroup(
  | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/sse_starlette/sse.py", line 258, in wrap
    |     await func()
    |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/sse_starlette/sse.py", line 245, in stream_response
    |     async for data in self.body_iterator:
    |   File "/home/zwy/workspace/chat_glm/Langchain-Chatchat/server/chat/knowledge_base_chat.py", line 81, in knowledge_base_chat_iterator
    |     docs = await run_in_threadpool(search_docs,
    |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/concurrency.py", line 40, in run_in_threadpool
    |     return await anyio.to_thread.run_sync(func, *args)
    |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
    |     return await get_async_backend().run_sync_in_worker_thread(
    |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    |     return await future
    |   File "/home/zwy/anaconda3/envs/langchain/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    |     result = context.run(func, *args)
    |   File "/home/zwy/workspace/chat_glm/Langchain-Chatchat/server/knowledge_base/kb_doc_api.py", line 38, in search_docs
    |     docs = kb.search_docs(query, top_k, score_threshold)
    |   File "/home/zwy/workspace/chat_glm/Langchain-Chatchat/server/knowledge_base/kb_service/base.py", line 181, in search_docs
    |     docs = self.do_search(query, top_k, score_threshold)
    |   File "/home/zwy/workspace/chat_glm/Langchain-Chatchat/server/knowledge_base/kb_service/faiss_kb_service.py", line 66, in do_search
    |     embeddings = embed_func.embed_query(query)
    |   File "/home/zwy/workspace/chat_glm/Langchain-Chatchat/server/knowledge_base/kb_service/base.py", line 424, in embed_query
    |     query_embed = embeddings[0]
    | TypeError: 'NoneType' object is not subscriptable
    +------------------------------------
2024-03-15 09:32:01,255 - utils.py[line:197] - ERROR: RemoteProtocolError: API通信遇到错误:peer closed connection without sending complete message body (incomplete chunked read)
```
Based on the logs, the `peer closed connection without sending complete message body (incomplete chunked read)` error is a downstream symptom: the server aborted the SSE stream because the request handler crashed mid-response. The traceback shows the root cause: in `EmbeddingsFunAdapter.embed_query`, the `embeddings` value is `None`, so indexing it with `embeddings[0]` raises `TypeError: 'NoneType' object is not subscriptable`. In other words, the embedding backend returned no result for the query, which points to a problem with the embedding model, the service it runs on, or the query being passed to it.
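The failure mode is easy to reproduce in isolation. The sketch below is a minimal, simplified stand-in for the traced code path (not Langchain-Chatchat's actual implementation); `get_embeddings` is a hypothetical placeholder for whatever backend call produces the vectors, returning `None` the way a failed embedding service does:

```python
def get_embeddings(texts):
    # Hypothetical backend call; returns None when the embedding
    # service fails, which is exactly what triggers the reported bug.
    return None

def embed_query_unsafe(query):
    embeddings = get_embeddings([query])
    # Indexing None reproduces the traceback's
    # TypeError: 'NoneType' object is not subscriptable
    return embeddings[0]

def embed_query_safe(query):
    embeddings = get_embeddings([query])
    if not embeddings:
        # Fail loudly with a diagnosable message instead of
        # crashing the SSE stream with a bare TypeError.
        raise RuntimeError(
            f"embedding backend returned no vectors for query: {query!r}"
        )
    return embeddings[0]
```

A guard like `embed_query_safe` does not fix the embedding backend, but it converts the opaque chunked-read error on the client into a log message that names the real problem.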
Here are some steps you can take to troubleshoot this issue:

- **Check the embedding model:** Ensure the embedding model is correctly initialized and can generate embeddings for the given query. Test the model separately with a sample query.
- **Check the query:** Make sure the query passed to the `embed_query` method is in the format the model expects, and that it is not empty or null.
- **Error handling and logging:** Add or improve error handling and logging around the code that makes the HTTP request, to pinpoint whether the problem is the request itself or the server's response.
- **Check the external service/API:** If embedding goes through an external service, confirm the service is operational and that you are calling it according to its documentation: endpoint URLs, request payload format, and required headers.
- **Network issues:** Investigate firewalls, proxies, or network policies that might cause the connection to drop.
- **Request timeout:** If the service is slow to respond, increase the timeout on your HTTP request to give it more time to complete.
- **Server configuration:** If you control the external service, verify it handles requests correctly and does not close connections prematurely.
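The first two steps above can be automated with a small smoke test run against whatever embedding callable you use in production (e.g. your model's `embed_query` bound method). The helper below is a generic sketch, not part of Langchain-Chatchat; `dummy_embed` is a hypothetical stand-in you would replace with your real embedder:

```python
def smoke_test_embedder(embed_fn, sample="什么是 RAG?"):
    """Check that an embedding callable returns a usable vector.

    Catches the None / empty / wrong-type results that would later
    surface as 'NoneType' object is not subscriptable deep in the
    knowledge-base chat path. Returns the embedding dimension.
    """
    vec = embed_fn(sample)
    assert vec is not None, "embedder returned None -- backend call likely failed"
    assert len(vec) > 0, "embedder returned an empty vector"
    assert all(isinstance(x, float) for x in vec), "vector entries are not floats"
    return len(vec)

# Hypothetical stand-in embedder, just to demonstrate the harness;
# swap in your real model's embed_query when diagnosing.
def dummy_embed(text):
    return [float(len(text)), 0.5, -0.25]

dim = smoke_test_embedder(dummy_embed)
```

Running this directly against the configured embedding model, outside the web stack, tells you immediately whether the knowledge-base failure is in the embedder or in the serving layer.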
I also found some similar issues in the Langchain-Chatchat repository that might be helpful:
- ERROR: RemoteProtocolError: API通信遇到错误:peer closed connection without sending complete message body (incomplete chunked read)
- [BUG] 救救孩子吧 变成在线LLM 和 Embedding 后 Exception in ASGI application 还有超时
- API通信遇到错误:peer closed connection without sending complete message body (incomplete chunked read)
These issues involve similar errors and the solutions might be found in the discussions of these issues.
If you have already followed these steps and are still encountering issues, it may require more information to assist further, such as whether you have set environment variables elsewhere or used the API key in other parts of the code.
I hope this helps! If you have any more questions or need further clarification, feel free to ask.
Sources
- ERROR: RemoteProtocolError: API通信遇到错误:peer closed connection without sending complete message body (incomplete chunked read)
- [BUG] 救救孩子吧 变成在线LLM 和 Embedding 后 Exception in ASGI application 还有超时
- API通信遇到错误:peer closed connection without sending complete message body (incomplete chunked read)
- [BUG] RemoteProtocolError: Caught exception: peer closed connection without sending
- LLM和向量化模型都使用openai-api,创建知识库出错。
- 使用知识库问答时报错
- server/knowledge_base/kb_service/faiss_kb_service.py
The context length was exceeded; just increase the max_tokens setting.
Where do I change max_tokens?
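In Langchain-Chatchat 0.2.x the token limit commonly lives in `configs/model_config.py`; the exact file and variable name may differ in your version, so treat this fragment as a hypothetical example and check your own config:

```python
# Hypothetical fragment of configs/model_config.py (Langchain-Chatchat 0.2.x);
# verify the actual variable name and location in your installed version.
MAX_TOKENS = 4096  # raise this if responses are truncated mid-stream
```

After editing the config, restart the services so the new limit takes effect.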