
Configured and running fine the night before, but restarting the next day throws errors

Open · Nikolahuang opened this issue · 7 comments

What is this problem? Access is being refused. [image]

Nikolahuang · Feb 17 '24 02:02

🤖

Hey, @AndersonHuangEDU! Long time no see, running into technical challenges again? 😄 Sorry to hear you've hit a problem. Don't worry, I'm looking into it and will get back to you shortly with a concrete solution.

dosubot[bot] · Feb 17 '24 02:02

2024-02-17 10:18:41,010 - utils.py[line:95] - ERROR: ConnectError: error when post /llm_model/list_running_models: [WinError 10061] 由于目标计算机积极拒绝,无法连接。
2024-02-17 10:18:43,028 - utils.py[line:95] - ERROR: ConnectError: error when post /llm_model/list_running_models: [WinError 10061] 由于目标计算机积极拒绝,无法连接。
2024-02-17 10:18:45,041 - utils.py[line:95] - ERROR: ConnectError: error when post /llm_model/list_running_models: [WinError 10061] 由于目标计算机积极拒绝,无法连接。
2024-02-17 10:18:47,155 - utils.py[line:95] - ERROR: ConnectError: error when post /llm_model/list_running_models: [WinError 10061] 由于目标计算机积极拒绝,无法连接。
2024-02-17 10:18:49,173 - utils.py[line:95] - ERROR: ConnectError: error when post /llm_model/list_running_models: [WinError 10061] 由于目标计算机积极拒绝,无法连接。
2024-02-17 10:18:51,190 - utils.py[line:95] - ERROR: ConnectError: error when post /llm_model/list_running_models: [WinError 10061] 由于目标计算机积极拒绝,无法连接。
2024-02-17 10:18:51.190 Uncaught app exception
Traceback (most recent call last):
  File "C:\Users\17769\.conda\envs\env\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
    exec(code, module.__dict__)
  File "D:\BaiduNetdiskDownload\langchain_chatchatv2_anderson\webui.py", line 64, in <module>
    pages[selected_page]["func"](api=api, is_lite=is_lite)
  File "D:\BaiduNetdiskDownload\langchain_chatchatv2_anderson\webui_pages\dialogue\dialogue.py", line 163, in dialogue_page
    running_models = list(api.list_running_models())
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not iterable

Nikolahuang · Feb 17 '24 02:02

🤖

Based on the error message you provided, it seems like the Langchain-Chatchat software is unable to establish a connection to the target computer, and the function api.list_running_models() is returning a None object when it's expected to return an iterable object.

The error "ConnectError: error when post /llm_model/list_running_models: [WinError 10061] 由于目标计算机积极拒绝,无法连接" indicates that the program is unable to establish a connection to the target computer. This could be due to several reasons such as the server not running, firewall blocking the connection, or network issues. You can check if the server is running and accessible, check your firewall settings, and ensure that you have a stable network connection.

The TypeError: 'NoneType' object is not iterable error occurs when you try to iterate over None. Here it means api.list_running_models() returned None instead of the expected iterable, so check why that call is not producing a result; adding print statements or stepping through with a debugger will show what it actually returns.

running_models = list(api.list_running_models())

If the function is returning None, you need to debug the function to find out why it is not returning the expected result.
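As a stopgap while debugging, the failing line in webui_pages/dialogue/dialogue.py can be guarded so the page shows a readable message instead of a traceback. This is only a sketch: the api object (the page's ApiRequest instance) and the surrounding Streamlit page are assumed, not copied from the repository.

import streamlit as st

result = api.list_running_models()  # returns None when the API server cannot be reached
if result is None:
    st.error("Could not reach the API server. Check that the processes started by startup.py are still running.")
    st.stop()
running_models = list(result)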

The api.list_running_models() function in the Langchain-Chatchat codebase is used to retrieve a list of currently loaded models and their configurations from the Fastchat controller. If an error occurs during this process, it logs the error and returns a BaseResponse with an error message.

Here is the function for your reference:

def list_running_models(
    controller_address: str = Body(None, description="Fastchat controller server address", examples=[fschat_controller_address()]),
    placeholder: str = Body(None, description="Unused parameter; placeholder only"),
) -> BaseResponse:
    '''
    Fetch the list of loaded models and their configurations from the fastchat controller.
    '''
    try:
        controller_address = controller_address or fschat_controller_address()
        with get_httpx_client() as client:
            r = client.post(controller_address + "/list_models")
            models = r.json()["models"]
            data = {m: get_model_config(m).data for m in models}
            return BaseResponse(data=data)
    except Exception as e:
        logger.error(f'{e.__class__.__name__}: {e}',
                        exc_info=e if log_verbose else None)
        return BaseResponse(
            code=500,
            data={},
            msg=f"failed to get available models from controller: {controller_address}。错误信息是: {e}")

You can find this function in the server/llm_api.py file.
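To isolate whether the API server itself is up, you can post to the same endpoint by hand. Below is a minimal probe, assuming the default API address http://127.0.0.1:7861 (verify the port in configs/server_config.py):

import httpx

api_base = "http://127.0.0.1:7861"  # assumed API server address; check configs/server_config.py
try:
    r = httpx.post(f"{api_base}/llm_model/list_running_models", json={}, timeout=10)
    print(r.status_code, r.json())
except httpx.ConnectError as e:
    print(f"API server unreachable at {api_base}: {e}")

A connection error here points at the API server process itself; a 500 response carrying the message above points at the controller behind it.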

I hope this helps! If you have any other questions or need further clarification, feel free to ask.


dosubot[bot] · Feb 17 '24 02:02

Restart the program.

zRzRzRzRzRzRzR · Feb 20 '24 04:02

This issue has been marked as stale because it has had no activity for more than 30 days.

github-actions[bot] · Mar 21 '24 21:03

Restart the program.

That doesn't work. Restarting gives the same result, and pulling the code fresh and running it again still throws the error. What is the cause, and how can it be fixed?

zmwstu · Apr 10 '24 17:04

Bro, did you ever solve this? Any help appreciated.

sh66666 · Apr 29 '24 06:04

It's probably because everything was configured to use API models; this normally doesn't happen. Try starting up with a local model selected.
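For reference, a local-only setup in the 0.2.x configs might look roughly like the sketch below. The key names and model names follow the 0.2.x example config but may differ between releases, and the paths are hypothetical; verify everything against your own configs/model_config.py.

# Illustrative excerpt of configs/model_config.py for starting a single local model
LLM_MODELS = ["chatglm3-6b"]      # no *-api entries, so no online API keys are needed
EMBEDDING_MODEL = "bge-large-zh"

MODEL_PATH = {
    "embed_model": {
        "bge-large-zh": "D:\\models\\bge-large-zh",  # hypothetical absolute path
    },
    "llm_model": {
        "chatglm3-6b": "D:\\models\\chatglm3-6b",    # hypothetical absolute path
    },
}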

zRzRzRzRzRzRzR · May 02 '24 12:05

It's probably because everything was configured to use API models; this normally doesn't happen. Try starting up with a local model selected.

I followed the README and still hit this error. I haven't configured the Zhipu API or the OpenAI API; I just pointed directly to the absolute path of GLM3 and of the embedding model, and then ran into this problem. A lot of people seem to be hitting it. What could the cause be?

Micla-SHL · May 10 '24 18:05

It's probably because everything was configured to use API models; this normally doesn't happen. Try starting up with a local model selected.

I followed the README and still hit this error. I haven't configured the Zhipu API or the OpenAI API; I just pointed directly to the absolute path of GLM3 and of the embedding model, and then ran into this problem. A lot of people seem to be hitting it. What could the cause be?

Today I set everything up on another machine with the same steps and the problem did not appear. Here is the difference between the two PCs: the one that failed yesterday had a proxy (VPN) tool configured and was on Wi-Fi with no wired connection; everything else matched the machine that worked today. I hope this gives others a lead. I also hope the new version won't have this problem.
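If a proxy was indeed the culprit, one common workaround is to exclude local addresses from it before the HTTP clients are created, either in the shell or near the top of startup.py. A minimal sketch (the environment variable names are the standard proxy-exclusion ones, not anything project-specific):

import os

# keep requests to the local controller/API server away from any system proxy
os.environ.setdefault("NO_PROXY", "localhost,127.0.0.1")
os.environ.setdefault("no_proxy", "localhost,127.0.0.1")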

Micla-SHL · May 11 '24 06:05