
[BUG] 简洁阐述问题 / Concise description of the issue

Open cstkn opened this issue 11 months ago • 3 comments

```
2024-03-10 09:54:49 | INFO | model_worker | Loading the model ['chatglm-6b-int4'] on worker 88957e44 ...
2024-03-10 09:54:49 | ERROR | stderr | Process model_worker - chatglm-6b-int4:
2024-03-10 09:54:49 | ERROR | stderr | Traceback (most recent call last):
2024-03-10 09:54:49 | ERROR | stderr |   File "C:\Users\lycor\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\process.py", line 314, in _bootstrap
2024-03-10 09:54:49 | ERROR | stderr |     self.run()
2024-03-10 09:54:49 | ERROR | stderr |   File "C:\Users\lycor\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\process.py", line 108, in run
2024-03-10 09:54:49 | ERROR | stderr |     self._target(*self._args, **self._kwargs)
2024-03-10 09:54:49 | ERROR | stderr |   File "D:\wujiaping\chat\Langchain-Chatchat\startup.py", line 389, in run_model_worker
2024-03-10 09:54:49 | ERROR | stderr |     app = create_model_worker_app(log_level=log_level, **kwargs)
2024-03-10 09:54:49 | ERROR | stderr |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-03-10 09:54:49 | ERROR | stderr |   File "D:\wujiaping\chat\Langchain-Chatchat\startup.py", line 217, in create_model_worker_app
2024-03-10 09:54:49 | ERROR | stderr |     worker = ModelWorker(
2024-03-10 09:54:49 | ERROR | stderr |              ^^^^^^^^^^^^
2024-03-10 09:54:49 | ERROR | stderr |   File "D:\wujiaping\chat\Langchain-Chatchat\.venv\Lib\site-packages\fastchat\serve\model_worker.py", line 77, in __init__
2024-03-10 09:54:49 | ERROR | stderr |     self.model, self.tokenizer = load_model(
2024-03-10 09:54:49 | ERROR | stderr |                                  ^^^^^^^^^^^
2024-03-10 09:54:49 | ERROR | stderr |   File "D:\wujiaping\chat\Langchain-Chatchat\.venv\Lib\site-packages\fastchat\model\model_adapter.py", line 257, in load_model
2024-03-10 09:54:49 | ERROR | stderr |     raise ValueError(f"Invalid device: {device}")
2024-03-10 09:54:49 | ERROR | stderr | ValueError: Invalid device: auto
```

I tried switching models on my own: I downloaded chatglm-6b-int4, placed it in the designated folder, and changed `LLM_MODELS = ["chatglm-6b-int4", "zhipu-api", "openai-api"]`. The error above appeared.
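For reference, the `ValueError` is raised by FastChat's device validation inside `load_model`. A minimal sketch of that kind of check (simplified and hypothetical, not FastChat's actual code) looks like:

```python
def validate_device(device: str) -> str:
    """Sketch of the kind of device check that raises 'Invalid device: auto'.

    FastChat accepts only a fixed set of device names; any other value,
    including the literal string 'auto', triggers a ValueError like the
    one in the log above.
    """
    supported = {"cpu", "cuda", "mps", "xpu"}
    if device not in supported:
        raise ValueError(f"Invalid device: {device}")
    return device
```

In other words, the string `"auto"` reaching `load_model` unresolved is the immediate cause; the fix is ensuring a concrete device name is passed instead.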

cstkn avatar Mar 10 '24 02:03 cstkn

Hi @cstkn, good to see you again! I hope you are doing well.

The "Invalid device: auto" error you are seeing likely means the system cannot recognize or correctly configure the "chatglm-6b-int4" model you are trying to use. There are a few possible causes:

  1. Missing model configuration: the MODEL_PATH or SUPPORT_AGENT_MODEL sections of the configuration file may have no entry for "chatglm-6b-int4". The provided context shows configurations for various models, but "chatglm-6b-int4" is not among them. For the system to recognize and load the model, it must be configured correctly in those sections.

  2. Wrong model name or path: if the model name or path for "chatglm-6b-int4" is incorrect, or the model files are not placed in the directory expected by MODEL_ROOT_PATH, the system cannot locate and load the model, which can produce errors.

  3. Device compatibility issue: the error message points to a device problem ("Invalid device: auto"), suggesting that automatic device detection may be failing for this model. However, the provided context does not show any configuration specific to "chatglm-6b-int4" that would clearly cause this error.

To resolve the issue, please check the following:

  • Model configuration: add an entry for "chatglm-6b-int4" to the MODEL_PATH and/or SUPPORT_AGENT_MODEL sections if it is a custom or locally stored model. This means specifying the correct path or identifier so the system can recognize and load it.

  • Correct model name and path: verify that the model name "chatglm-6b-int4" is correct and that the model files are in the directory expected by the MODEL_ROOT_PATH setting.

  • Device setting: since the error mentions an invalid device, try setting LLM_DEVICE manually to a specific device ("cuda", "mps", "cpu", "xpu") instead of "auto" and see whether that resolves the problem. This change will help determine whether automatic device detection is the source of the issue.

Without direct access to your system or more detailed error logs, these suggestions are based on the provided context and the common configuration requirements of systems like Langchain-Chatchat.
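Putting the three suggestions together, the relevant edits to `configs/model_config.py` might look like the sketch below. The local path and the exact dict layout are assumptions; adjust them to match your checkout:

```python
# configs/model_config.py -- illustrative fragment, not a drop-in file

# 1. Register the local model under MODEL_PATH so the loader can resolve it.
#    The path below is a placeholder; point it at the folder that actually
#    holds the downloaded chatglm-6b-int4 weights.
MODEL_PATH = {
    "llm_model": {
        "chatglm-6b-int4": "D:/models/chatglm-6b-int4",
    },
}

# 2. List the model among those to start.
LLM_MODELS = ["chatglm-6b-int4", "zhipu-api", "openai-api"]

# 3. Pin the device explicitly instead of relying on "auto" detection.
LLM_DEVICE = "cuda"  # or "cpu", "mps", "xpu"
```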

Sources

About Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

dosubot[bot] avatar Mar 10 '24 02:03 dosubot[bot]

@dosu-bot I added some paths, but I still get the following error:

```
2024-03-10 11:18:39 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker 9011040e ...
2024-03-10 11:18:39 | INFO | model_worker | Loading the model ['chatglm-6b-int4'] on worker ab55ae25 ...
2024-03-10 11:18:39 | ERROR | stderr | Process model_worker - chatglm-6b-int4:
2024-03-10 11:18:39 | ERROR | stderr | Traceback (most recent call last):
2024-03-10 11:18:39 | ERROR | stderr |   File "C:\Users\lycor\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\process.py", line 314, in _bootstrap
2024-03-10 11:18:39 | ERROR | stderr |     self.run()
2024-03-10 11:18:39 | ERROR | stderr |   File "C:\Users\lycor\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\process.py", line 108, in run
2024-03-10 11:18:39 | ERROR | stderr |     self._target(*self._args, **self._kwargs)
2024-03-10 11:18:39 | ERROR | stderr |   File "D:\wujiaping\chat\Langchain-Chatchat\startup.py", line 389, in run_model_worker
2024-03-10 11:18:39 | ERROR | stderr |     app = create_model_worker_app(log_level=log_level, **kwargs)
2024-03-10 11:18:39 | ERROR | stderr |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-03-10 11:18:39 | ERROR | stderr |   File "D:\wujiaping\chat\Langchain-Chatchat\startup.py", line 217, in create_model_worker_app
2024-03-10 11:18:39 | ERROR | stderr |     worker = ModelWorker(
2024-03-10 11:18:39 | ERROR | stderr |              ^^^^^^^^^^^^
2024-03-10 11:18:39 | ERROR | stderr |   File "D:\wujiaping\chat\Langchain-Chatchat\.venv\Lib\site-packages\fastchat\serve\model_worker.py", line 77, in __init__
2024-03-10 11:18:39 | ERROR | stderr |     self.model, self.tokenizer = load_model(
2024-03-10 11:18:39 | ERROR | stderr |                                  ^^^^^^^^^^^
2024-03-10 11:18:39 | ERROR | stderr |   File "D:\wujiaping\chat\Langchain-Chatchat\.venv\Lib\site-packages\fastchat\model\model_adapter.py", line 348, in load_model
2024-03-10 11:18:39 | ERROR | stderr |     model, tokenizer = adapter.load_model(model_path, kwargs)
2024-03-10 11:18:39 | ERROR | stderr |                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-03-10 11:18:39 | ERROR | stderr |   File "D:\wujiaping\chat\Langchain-Chatchat\.venv\Lib\site-packages\fastchat\model\model_adapter.py", line 823, in load_model
2024-03-10 11:18:39 | ERROR | stderr |     tokenizer = AutoTokenizer.from_pretrained(
2024-03-10 11:18:39 | ERROR | stderr |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-03-10 11:18:39 | ERROR | stderr |   File "D:\wujiaping\chat\Langchain-Chatchat\.venv\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 801, in from_pretrained
2024-03-10 11:18:39 | ERROR | stderr |     return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
2024-03-10 11:18:39 | ERROR | stderr |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-03-10 11:18:39 | ERROR | stderr |   File "D:\wujiaping\chat\Langchain-Chatchat\.venv\Lib\site-packages\transformers\tokenization_utils_base.py", line 2029, in from_pretrained
2024-03-10 11:18:39 | ERROR | stderr |     return cls._from_pretrained(
2024-03-10 11:18:39 | ERROR | stderr |            ^^^^^^^^^^^^^^^^^^^^^
2024-03-10 11:18:39 | ERROR | stderr |   File "D:\wujiaping\chat\Langchain-Chatchat\.venv\Lib\site-packages\transformers\tokenization_utils_base.py", line 2261, in _from_pretrained
2024-03-10 11:18:39 | ERROR | stderr |     tokenizer = cls(*init_inputs, **init_kwargs)
2024-03-10 11:18:39 | ERROR | stderr |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-03-10 11:18:39 | ERROR | stderr |   File "C:\Users\lycor\.cache\huggingface\modules\transformers_modules\chatglm-6b-int4\tokenization_chatglm.py", line 196, in __init__
2024-03-10 11:18:39 | ERROR | stderr |     super().__init__(
2024-03-10 11:18:39 | ERROR | stderr |   File "D:\wujiaping\chat\Langchain-Chatchat\.venv\Lib\site-packages\transformers\tokenization_utils.py", line 367, in __init__
2024-03-10 11:18:39 | ERROR | stderr |     self._add_tokens(
2024-03-10 11:18:39 | ERROR | stderr |   File "D:\wujiaping\chat\Langchain-Chatchat\.venv\Lib\site-packages\transformers\tokenization_utils.py", line 467, in _add_tokens
2024-03-10 11:18:39 | ERROR | stderr |     current_vocab = self.get_vocab().copy()
2024-03-10 11:18:39 | ERROR | stderr |                     ^^^^^^^^^^^^^^^^
2024-03-10 11:18:39 | ERROR | stderr |   File "C:\Users\lycor\.cache\huggingface\modules\transformers_modules\chatglm-6b-int4\tokenization_chatglm.py", line 248, in get_vocab
2024-03-10 11:18:39 | ERROR | stderr |     vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)}
2024-03-10 11:18:39 | ERROR | stderr |                                                           ^^^^^^^^^^^^^^^
2024-03-10 11:18:39 | ERROR | stderr |   File "C:\Users\lycor\.cache\huggingface\modules\transformers_modules\chatglm-6b-int4\tokenization_chatglm.py", line 244, in vocab_size
2024-03-10 11:18:39 | ERROR | stderr |     return self.sp_tokenizer.num_tokens
2024-03-10 11:18:39 | ERROR | stderr |            ^^^^^^^^^^^^^^^^^
2024-03-10 11:18:39 | ERROR | stderr | AttributeError: 'ChatGLMTokenizer' object has no attribute 'sp_tokenizer'
```

cstkn avatar Mar 10 '24 03:03 cstkn

Based on the error message you provided, it seems like the 'ChatGLMTokenizer' object is missing the 'sp_tokenizer' attribute. This error is usually caused by an issue with the tokenizer file associated with the model. Here are a few steps you can take to resolve this issue:

  1. Check the Tokenizer File: Ensure that the tokenizer file associated with the 'chatglm-6b-int4' model is present in the model directory and is correctly named. The tokenizer file is usually named 'tokenizer.json' or 'tokenizer.model'. If the tokenizer file is missing or incorrectly named, the model will not load correctly.

  2. Verify the Model and Tokenizer Compatibility: The 'chatglm-6b-int4' model and its tokenizer should be compatible with the version of the transformers library you are using. If the model or tokenizer is not compatible, you may need to update your transformers library or use a different version of the model that is compatible with your current transformers library.

  3. Update the Transformers Library: If you are using an older version of the transformers library, you may need to update it to a newer version that supports the 'chatglm-6b-int4' model and its tokenizer. You can update the transformers library using the following command:

    pip install --upgrade transformers
    
  4. Re-download the Model and Tokenizer: If the above steps do not resolve the issue, the model or tokenizer file may be corrupted. In this case, you should re-download the 'chatglm-6b-int4' model and its tokenizer from the original source and replace the existing files in your model directory.

If you have followed these steps and the error still persists, it would be helpful if you could provide more information about your setup, such as the version of the transformers library you are using and the exact structure of your 'chatglm-6b-int4' model directory.
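As a quick way to carry out step 1, a small helper (hypothetical, not part of Langchain-Chatchat; the file names assume the original chatglm-6b tokenizer layout, which ships a SentencePiece file named `ice_text.model`) can report which expected tokenizer files are missing from the model directory:

```python
from pathlib import Path

def missing_tokenizer_files(model_dir: str) -> list[str]:
    """Return the expected ChatGLM tokenizer files absent from model_dir.

    If ice_text.model (the SentencePiece vocabulary) is missing or was not
    downloaded completely, ChatGLMTokenizer cannot build its sp_tokenizer
    attribute, which matches the AttributeError in the log above.
    """
    expected = ("tokenizer_config.json", "ice_text.model")
    root = Path(model_dir)
    return [name for name in expected if not (root / name).is_file()]
```

An empty result means the expected files are at least present; a corrupted or partially downloaded file would still require re-downloading as described in step 4.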

For reference, you can check out these similar issues in the Langchain-Chatchat repository: Issue #1835 and Issue #2181. These issues discuss similar errors and their solutions might be applicable to your situation.


dosubot[bot] avatar Mar 10 '24 03:03 dosubot[bot]

This issue has been marked as stale because it has had no activity for more than 30 days.

github-actions[bot] avatar Apr 09 '24 21:04 github-actions[bot]