
python startup.py -a: after startup, no conversation is possible, with either the LLM or the local knowledge base

Open · winer3086 opened this issue 11 months ago · 1 comment

[screenshot attached]

============================== Langchain-Chatchat Configuration ==============================
OS: Windows-10-10.0.22631-SP0
Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:34:57) [MSC v.1936 64 bit (AMD64)]
Project version: v0.2.7
langchain version: 0.1.11
fastchat version: 0.2.32

Current text splitter: ChineseRecursiveTextSplitter
Loaded LLM models: ['chatglm2-6b', 'zhipu-api', 'openai-api'] @ cpu
{'device': 'cpu', 'host': '127.0.0.1', 'infer_turbo': False, 'model_path': 'E:\models\chatglm2-6b', 'port': 20002}
{'api_key': '', 'device': 'auto', 'host': '127.0.0.1', 'infer_turbo': False, 'online_api': True, 'port': 21001, 'provider': 'ChatGLMWorker', 'version': 'chatglm_turbo', 'worker_class': <class 'server.model_workers.zhipu.ChatGLMWorker'>}
{'api_base_url': 'https://api.openai.com/v1', 'api_key': '', 'device': 'auto', 'host': '127.0.0.1', 'infer_turbo': False, 'model_name': 'gpt-35-turbo', 'online_api': True, 'openai_proxy': '', 'port': 20002}
Current embeddings model: m3e-base @ cpu

Server runtime info:
OpenAI API Server: http://127.0.0.1:20000/v1
Chatchat API Server: http://127.0.0.1:7861
Chatchat WEBUI Server: http://127.0.0.1:8501
============================== Langchain-Chatchat Configuration ==============================

You can now view your Streamlit app in your browser.

URL: http://127.0.0.1:8501

INFO:     127.0.0.1:55086 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-03-17 15:08:42,622 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
{'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
2024-03-17 15:08:42,720 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:55086 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-03-17 15:08:42,721 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
received input message: {'history': [], 'max_tokens': None, 'model_name': 'chatglm2-6b', 'prompt_name': 'default', 'query': '你是谁', 'stream': True, 'temperature': 0.7}
INFO:     127.0.0.1:55086 - "POST /chat/chat HTTP/1.1" 200 OK
2024-03-17 15:08:42,730 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/chat "HTTP/1.1 200 OK"
D:\Anaconda\envs\langchain\lib\site-packages\langchain_core\_api\deprecation.py:117: LangChainDeprecationWarning: The class langchain_community.chat_models.openai.ChatOpenAI was deprecated in langchain-community 0.0.10 and will be removed in 0.2.0. An updated version of the class exists in the langchain-openai package and should be used instead. To use it run pip install -U langchain-openai and import as from langchain_openai import ChatOpenAI.
  warn_deprecated(
D:\Anaconda\envs\langchain\lib\site-packages\langchain_core\_api\deprecation.py:117: LangChainDeprecationWarning: The function acall was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use ainvoke instead.
  warn_deprecated(
2024-03-17 15:08:43 | INFO | stdout | INFO:     127.0.0.1:55089 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2024-03-17 15:08:43,100 - _client.py[line:1773] - INFO: HTTP Request: POST http://127.0.0.1:20000/v1/chat/completions "HTTP/1.1 200 OK"
2024-03-17 15:08:43 | INFO | httpx | HTTP Request: POST http://127.0.0.1:20002/worker_generate_stream "HTTP/1.1 200 OK"
2024-03-17 15:08:43,561 - utils.py[line:25] - ERROR: TypeError: Caught exception: object of type 'NoneType' has no len()
{'base_url': 'http://127.0.0.1:7861', 'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
{'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
2024-03-17 15:08:50,043 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:55097 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-03-17 15:08:50,045 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:55097 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-03-17 15:08:50,047 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
{'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
2024-03-17 15:08:50,142 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:55097 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-03-17 15:08:50,143 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
received input message: {'history': [{'content': '你是谁', 'role': 'user'}, {'content': '', 'role': 'assistant'}], 'max_tokens': None, 'model_name': 'chatglm2-6b', 'prompt_name': 'default', 'query': '你是谁', 'stream': True, 'temperature': 0.7}
INFO:     127.0.0.1:55097 - "POST /chat/chat HTTP/1.1" 200 OK
2024-03-17 15:08:50,152 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/chat "HTTP/1.1 200 OK"
2024-03-17 15:08:50 | INFO | stdout | INFO:     127.0.0.1:55100 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2024-03-17 15:08:50,382 - _client.py[line:1773] - INFO: HTTP Request: POST http://127.0.0.1:20000/v1/chat/completions "HTTP/1.1 200 OK"
2024-03-17 15:08:50 | INFO | httpx | HTTP Request: POST http://127.0.0.1:20002/worker_generate_stream "HTTP/1.1 200 OK"
2024-03-17 15:08:50,485 - utils.py[line:25] - ERROR: TypeError: Caught exception: object of type 'NoneType' has no len()
2024-03-17 15:08:50.486 Uncaught app exception
Traceback (most recent call last):
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 541, in _run_script
    exec(code, module.__dict__)
  File "E:\Langchain-Chatchat-0.2.7\webui.py", line 64, in <module>
    pages[selected_page]["func"](api=api, is_lite=is_lite)
  File "E:\Langchain-Chatchat-0.2.7\webui_pages\dialogue\dialogue.py", line 223, in dialogue_page
    chat_box.show_feedback(**feedback_kwargs,
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit_chatbox\messages.py", line 309, in show_feedback
    return streamlit_feedback(**kwargs)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit_feedback\__init__.py", line 104, in streamlit_feedback
    component_value = _component_func(
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\components\v1\components.py", line 80, in __call__
    return self.create_instance(*args, default=default, key=key, **kwargs)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\runtime\metrics_util.py", line 367, in wrapped_func
    result = non_optional_func(*args, **kwargs)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\components\v1\components.py", line 241, in create_instance
    return_value = marshall_component(dg, element)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\components\v1\components.py", line 212, in marshall_component
    component_state = register_widget(
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit_option_menu\streamlit_callback.py", line 20, in wrapper_register_widget
    return register_widget(*args, **kwargs)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\runtime\state\widgets.py", line 161, in register_widget
    return register_widget_from_metadata(metadata, ctx, widget_func_name, element_type)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\runtime\state\widgets.py", line 194, in register_widget_from_metadata
    raise DuplicateWidgetID(
streamlit.errors.DuplicateWidgetID: There are multiple widgets with the same key=''.

To fix this, please make sure that the key argument is unique for each widget you create.
{'base_url': 'http://127.0.0.1:7861', 'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
{'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
2024-03-17 15:13:05,657 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:55168 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-03-17 15:13:05,658 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:55168 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-03-17 15:13:05,660 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
{'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
2024-03-17 15:13:05,757 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:55168 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-03-17 15:13:05,759 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:55168 - "GET /knowledge_base/list_knowledge_bases HTTP/1.1" 200 OK
2024-03-17 15:13:05,763 - _client.py[line:1026] - INFO: HTTP Request: GET http://127.0.0.1:7861/knowledge_base/list_knowledge_bases "HTTP/1.1 200 OK"
2024-03-17 15:13:05.767 Uncaught app exception
Traceback (most recent call last):
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 541, in _run_script
    exec(code, module.__dict__)
  File "E:\Langchain-Chatchat-0.2.7\webui.py", line 64, in <module>
    pages[selected_page]["func"](api=api, is_lite=is_lite)
  File "E:\Langchain-Chatchat-0.2.7\webui_pages\dialogue\dialogue.py", line 178, in dialogue_page
    chat_box.output_messages()
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit_chatbox\messages.py", line 337, in output_messages
    self.show_feedback(history_index=i, **feedback_kwargs)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit_chatbox\messages.py", line 309, in show_feedback
    return streamlit_feedback(**kwargs)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit_feedback\__init__.py", line 104, in streamlit_feedback
    component_value = _component_func(
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\components\v1\components.py", line 80, in __call__
    return self.create_instance(*args, default=default, key=key, **kwargs)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\runtime\metrics_util.py", line 367, in wrapped_func
    result = non_optional_func(*args, **kwargs)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\components\v1\components.py", line 241, in create_instance
    return_value = marshall_component(dg, element)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\components\v1\components.py", line 212, in marshall_component
    component_state = register_widget(
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit_option_menu\streamlit_callback.py", line 20, in wrapper_register_widget
    return register_widget(*args, **kwargs)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\runtime\state\widgets.py", line 161, in register_widget
    return register_widget_from_metadata(metadata, ctx, widget_func_name, element_type)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\runtime\state\widgets.py", line 194, in register_widget_from_metadata
    raise DuplicateWidgetID(
streamlit.errors.DuplicateWidgetID: There are multiple widgets with the same key=''.

To fix this, please make sure that the key argument is unique for each widget you create.
{'base_url': 'http://127.0.0.1:7861', 'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
{'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
2024-03-17 15:13:19,996 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:55174 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-03-17 15:13:19,997 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:55174 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-03-17 15:13:19,999 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
{'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
2024-03-17 15:13:20,096 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:55174 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-03-17 15:13:20,097 - _client.py[line:1026] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:55174 - "GET /knowledge_base/list_knowledge_bases HTTP/1.1" 200 OK
2024-03-17 15:13:20,101 - _client.py[line:1026] - INFO: HTTP Request: GET http://127.0.0.1:7861/knowledge_base/list_knowledge_bases "HTTP/1.1 200 OK"
2024-03-17 15:13:20.105 Uncaught app exception
Traceback (most recent call last):
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 541, in _run_script
    exec(code, module.__dict__)
  File "E:\Langchain-Chatchat-0.2.7\webui.py", line 64, in <module>
    pages[selected_page]["func"](api=api, is_lite=is_lite)
  File "E:\Langchain-Chatchat-0.2.7\webui_pages\dialogue\dialogue.py", line 178, in dialogue_page
    chat_box.output_messages()
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit_chatbox\messages.py", line 337, in output_messages
    self.show_feedback(history_index=i, **feedback_kwargs)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit_chatbox\messages.py", line 309, in show_feedback
    return streamlit_feedback(**kwargs)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit_feedback\__init__.py", line 104, in streamlit_feedback
    component_value = _component_func(
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\components\v1\components.py", line 80, in __call__
    return self.create_instance(*args, default=default, key=key, **kwargs)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\runtime\metrics_util.py", line 367, in wrapped_func
    result = non_optional_func(*args, **kwargs)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\components\v1\components.py", line 241, in create_instance
    return_value = marshall_component(dg, element)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\components\v1\components.py", line 212, in marshall_component
    component_state = register_widget(
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit_option_menu\streamlit_callback.py", line 20, in wrapper_register_widget
    return register_widget(*args, **kwargs)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\runtime\state\widgets.py", line 161, in register_widget
    return register_widget_from_metadata(metadata, ctx, widget_func_name, element_type)
  File "D:\Anaconda\envs\langchain\lib\site-packages\streamlit\runtime\state\widgets.py", line 194, in register_widget_from_metadata
    raise DuplicateWidgetID(
streamlit.errors.DuplicateWidgetID: There are multiple widgets with the same key=''.

To fix this, please make sure that the key argument is unique for each widget you create.

winer3086 avatar Mar 17 '24 07:03 winer3086

Hey, @winer3086! Great to see you asking a question here again. Hope all has been well!

Based on the information you provided, you are hitting a TypeError and a streamlit.errors.DuplicateWidgetID error when starting a conversation after running python startup.py -a. Possible causes include:

  1. TypeError: this kind of error usually means a parameter or variable somewhere in your code does not have the expected type. However, without a specific traceback pointing to where the TypeError occurs in your server/api.py file, it is hard to determine the exact cause. The code provided does not interact with Streamlit directly, nor does it show an obvious source for a TypeError related to running the command.
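The log line object of type 'NoneType' has no len() suggests a value expected to be a string (for example, a model response) is None at the point where its length is taken. A minimal sketch of the failure mode and a defensive guard; the function name and the "text" field are hypothetical illustrations, not taken from the project code:

```python
def response_length(payload: dict) -> int:
    """Return the length of the 'text' field, treating a missing or
    None value as zero instead of raising TypeError."""
    text = payload.get("text")
    if text is None:  # len(None) raises: object of type 'NoneType' has no len()
        return 0
    return len(text)

print(response_length({"text": "hello"}))  # 5
print(response_length({"text": None}))     # 0, instead of a TypeError
```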

  2. DuplicateWidgetID error: streamlit.errors.DuplicateWidgetID indicates that multiple Streamlit widgets were created with the same key within a single app session. This can happen when widgets are generated dynamically, in a loop or a conditional, without ensuring a unique key for each widget instance. However, the code provided does not directly define or use any Streamlit widgets, so this error likely relates to another part of your application not included in the context. Streamlit is not referenced in the provided server/api.py file, which suggests the problem originates in another file that uses Streamlit for the front end.

Given the context provided, the important things to check are:

  • Make sure every Streamlit widget (e.g. st.text_input, st.button) that is created dynamically or conditionally has a unique key.
  • Look at the full traceback of the TypeError to find the exact line and operation causing the problem. That will show whether it stems from an incorrectly typed value passed through your FastAPI endpoint or from some other part of the application.
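The first point can be sketched without running Streamlit itself: derive each widget's key from something unique per instance, such as the message index, instead of leaving it empty. The helper below is a hypothetical illustration, not part of Langchain-Chatchat:

```python
def feedback_widget_key(page: str, history_index: int) -> str:
    """Derive a widget key that is unique per page and per message, so
    that feedback widgets rendered in a loop never share a key."""
    return f"{page}_feedback_{history_index}"

# In a Streamlit app this value would be passed as the key= argument,
# e.g. streamlit_feedback(..., key=feedback_widget_key("dialogue", i)),
# instead of every widget falling back to the same key=''.
keys = [feedback_widget_key("dialogue", i) for i in range(3)]
print(keys)  # ['dialogue_feedback_0', 'dialogue_feedback_1', 'dialogue_feedback_2']
```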

Without a specific error message or traceback, and since the provided code neither creates Streamlit widgets nor performs any operation that would obviously cause a TypeError when running the command, it is difficult to offer a more precise solution.

If you can share more detailed error information, such as the full tracebacks of the TypeError and the streamlit.errors.DuplicateWidgetID error, along with any other code files involved, I may be able to help you further with this problem.

Also, you are running Langchain-Chatchat v0.2.7, while the latest release is v0.2.10. Some bugs may have been fixed between these versions, so it is worth upgrading to the latest version and checking whether the problem persists.


Sources

About Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

dosubot[bot] avatar Mar 17 '24 07:03 dosubot[bot]

This issue has been marked as stale because it has had no activity for more than 30 days.

github-actions[bot] avatar Apr 16 '24 21:04 github-actions[bot]