Langchain-Chatchat

[BUG] Only a single character is output when asking a question, then generation stops

Open SolarKnight1 opened this issue 1 year ago • 4 comments

Problem Description: see the attached screenshot (微信截图_20240223155106).

Steps to Reproduce

  1. Run: python startup.py -a
  2. Go to the Knowledge Base Q&A (知识库问答) page
  3. Problem occurs: when the second question is asked, only one character is output before the answer stops

Environment Information

  • langchain-ChatGLM version/commit number: v0.2.10
  • Docker deployment (yes/no): no
  • Model used (ChatGLM2-6B / Qwen-7B, etc.): qwen1.5-14b
  • Embedding model used (moka-ai/m3e-base, etc.): bge-large
  • Vector store used (faiss / milvus / pg_vector, etc.): faiss
  • Operating system and version: Ubuntu
  • Python version: 3.9
  • Other relevant environment information: none

Additional Information: none.

SolarKnight1 · Feb 23 '24 07:02

🤖

Hey @SolarKnight1! Good to see you again. Running into a new issue? Let's take a look at what's going on.

Thanks for the report. Based on your description, when you ask the second question in Knowledge Base Q&A, Langchain-Chatchat outputs only a single character. This may be caused by how the on_tool_start method handles input_str.

In the on_tool_start method, input_str is truncated at the first occurrence of a stop word:

stop_words = ["Observation:", "Thought","\"","(", "\n","\t"]
for stop_word in stop_words:
    index = input_str.find(stop_word)
    if index != -1:
        input_str = input_str[:index]
        break

If the second question contains any of these stop words, it is cut at that point, and if a stop word appears near the beginning of the question, only a single character may be output.

To work around this, you could change how on_tool_start handles input_str, for example by not truncating at the first stop word or by treating stop words differently; a sketch of one such variant follows. You can find the relevant code here.
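
To make that suggestion concrete, here is a minimal sketch of one alternative (illustration only, not the project's actual fix): truncate at a stop word only when it appears after some real content, so a question that merely starts with one of these tokens is passed through intact. The stop-word list is copied from the snippet above; the helper name clean_tool_input and the min_keep threshold are made up for this example.

# Hypothetical helper sketching a less aggressive stop-word treatment for on_tool_start.
STOP_WORDS = ["Observation:", "Thought", "\"", "(", "\n", "\t"]

def clean_tool_input(input_str: str, min_keep: int = 1) -> str:
    """Truncate at a stop word only if more than `min_keep` characters precede it."""
    for stop_word in STOP_WORDS:
        index = input_str.find(stop_word)
        if index > min_keep:            # ignore stop words at or near the very start
            return input_str[:index]
    return input_str                     # no usable stop word: pass the input through unchanged

print(clean_tool_input("查询天气 Observation: ..."))   # -> "查询天气 "
print(clean_tool_input("(第二个问题"))                  # kept intact instead of collapsing to ""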

In addition, Langchain-Chatchat handles consecutive questions by maintaining a conversation history. That history can be supplied by the frontend or fetched from the database by conversation ID; either way, it is used to build the prompt for the language model, which then generates the response. This is how the application keeps conversational context across consecutive questions. You can find the relevant code here.
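
For illustration only (the function and field names below are assumptions, not the project's actual API), the mechanism amounts to converting each stored turn into a role-tagged message and appending the new question before calling the model:

# Hypothetical sketch of folding a conversation history into the next prompt.
from typing import Dict, List

def build_messages(history: List[Dict[str, str]], question: str) -> List[Dict[str, str]]:
    """Prepend prior turns (role 'user' or 'assistant') to the new question."""
    messages = [{"role": turn["role"], "content": turn["content"]} for turn in history]
    messages.append({"role": "user", "content": question})
    return messages

history = [
    {"role": "user", "content": "第一个问题"},
    {"role": "assistant", "content": "第一个回答"},
]
print(build_messages(history, "第二个问题"))  # three messages: two from history plus the new question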

I hope this helps. If you have any other questions or need further clarification, feel free to let me know.


About Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

dosubot[bot] · Feb 23 '24 07:02

You need to provide detailed logs. You can enable printing of all logs in config/basicconfig by setting the parameter to True.
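
For reference, in the v0.2.x releases this usually means editing configs/basic_config.py; the exact file path and variable name below are stated as an assumption, so check your local copy:

# configs/basic_config.py (assumed location of the switch mentioned above)
# True makes the server print full logs, including complete tracebacks.
log_verbose = True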

glide-the · Feb 23 '24 08:02

(Quoting the previous comment: "You need to provide detailed logs. You can enable printing of all logs in config/basicconfig by setting the parameter to True.")

Batches: 100%|█████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 53.94it/s]
2024-02-23 16:41:02,165 - base.py[line:36] - INFO: thread 24701 开始操作:tmp12zi2gjf。
2024-02-23 16:41:02,165 - base.py[line:40] - INFO: thread 24701 结束操作:tmp12zi2gjf。
2024-02-23 16:41:02 | INFO | stdout | INFO:     127.0.0.1:34004 - "POST /list_models HTTP/1.1" 200 OK
2024-02-23 16:41:02 | INFO | controller | names: ['http://127.0.0.1:20002'], queue_lens: [2.0], ret: http://127.0.0.1:20002
2024-02-23 16:41:02 | INFO | stdout | INFO:     127.0.0.1:34006 - "POST /get_worker_address HTTP/1.1" 200 OK
2024-02-23 16:41:02 | INFO | stdout | INFO:     127.0.0.1:44012 - "POST /model_details HTTP/1.1" 200 OK
2024-02-23 16:41:02 | INFO | stdout | INFO:     127.0.0.1:44014 - "POST /count_token HTTP/1.1" 200 OK
2024-02-23 16:41:02 | INFO | stdout | INFO:     127.0.0.1:58306 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2024-02-23 16:41:02,213 - _client.py[line:1758] - INFO: HTTP Request: POST http://127.0.0.1:20000/v1/chat/completions "HTTP/1.1 200 OK"
2024-02-23 16:41:02 | INFO | stdout | INFO:     127.0.0.1:44016 - "POST /worker_generate_stream HTTP/1.1" 200 OK
2024-02-23 16:41:02 | INFO | httpx | HTTP Request: POST http://127.0.0.1:20002/worker_generate_stream "HTTP/1.1 200 OK"
2024-02-23 16:41:02,640 - utils.py[line:38] - ERROR: object of type 'NoneType' has no len()
Traceback (most recent call last):
  File "/home/wyxx/warBackup/ner/Langchain-Chatchat/server/utils.py", line 36, in wrap_done
    await fn
  File "/data/software/anaconda3/lib/python3.9/site-packages/langchain/chains/base.py", line 385, in acall
    raise e
  File "/data/software/anaconda3/lib/python3.9/site-packages/langchain/chains/base.py", line 379, in acall
    await self._acall(inputs, run_manager=run_manager)
  File "/data/software/anaconda3/lib/python3.9/site-packages/langchain/chains/llm.py", line 275, in _acall
    response = await self.agenerate([inputs], run_manager=run_manager)
  File "/data/software/anaconda3/lib/python3.9/site-packages/langchain/chains/llm.py", line 142, in agenerate
    return await self.llm.agenerate_prompt(
  File "/data/software/anaconda3/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 553, in agenerate_prompt
    return await self.agenerate(
  File "/data/software/anaconda3/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 513, in agenerate
    raise exceptions[0]
  File "/data/software/anaconda3/lib/python3.9/asyncio/tasks.py", line 256, in __step
    result = coro.send(None)
  File "/data/software/anaconda3/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 616, in _agenerate_with_cache
    return await self._agenerate(
  File "/data/software/anaconda3/lib/python3.9/site-packages/langchain_community/chat_models/openai.py", line 522, in _agenerate
    return await agenerate_from_stream(stream_iter)
  File "/data/software/anaconda3/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 86, in agenerate_from_stream
    async for chunk in stream:
  File "/data/software/anaconda3/lib/python3.9/site-packages/langchain_community/chat_models/openai.py", line 493, in _astream
    if len(chunk["choices"]) == 0:
TypeError: object of type 'NoneType' has no len()

2024-02-23 16:41:02,642 - utils.py[line:40] - ERROR: TypeError: Caught exception: object of type 'NoneType' has no len()
(the same traceback is then logged a second time by the exception wrapper in utils.py)
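
Reading the traceback: the streamed chunk parsed to None (the worker returned an empty or error payload) before langchain_community's _astream indexed chunk["choices"], hence len() being called on None. The snippet below only illustrates that failure mode and a generic guard for consuming such a stream yourself; it is not a patch for the library, and the chunks list is invented for the example:

# Illustration of the failure mode from the traceback above, plus a generic null-chunk guard.
chunks = [{"choices": [{"delta": {"content": "你"}}]}, None]  # a None chunk is what triggers the TypeError

for chunk in chunks:
    if chunk is None or not chunk.get("choices"):
        continue  # skip empty/error payloads instead of calling len() on None
    print(chunk["choices"][0]["delta"]["content"])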

SolarKnight1 · Feb 23 '24 08:02

This looks like either your GPU ran out of memory or the model exceeded its token limit; I suggest checking GPU memory usage.
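
A quick way to check GPU memory from Python (assuming a CUDA device and a reasonably recent PyTorch; nvidia-smi from the shell gives the same information):

# Minimal GPU memory check with PyTorch.
import torch

free_bytes, total_bytes = torch.cuda.mem_get_info()  # free/total bytes on the current CUDA device
used_gb = (total_bytes - free_bytes) / 1e9
print(f"GPU memory used: {used_gb:.1f} GB of {total_bytes / 1e9:.1f} GB")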

zRzRzRzRzRzRzR · Feb 24 '24 02:02