Langchain-Chatchat
PyTorch weight file is recognized as a TF 2.0 file: 'pytorch_model-00001-of-00007.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
The full error output is below. Operating system: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.17. Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]. Project version: v0.2.5. langchain version: 0.0.302. fastchat version: 0.2.29.
Current text splitter: ChineseRecursiveTextSplitter. Currently running LLM model: ['chatglm2-6b'] @ cpu {'device': 'cpu', 'host': '127.0.0.1', 'infer_turbo': 'vllm', 'model_path': '/home/qianlab03/rjs/Langchain-Chatchat-0.2.7/chatglm2-6b', 'port': 20002}. Current Embeddings model: bge-large-zh @ cpu
==============================Langchain-Chatchat Configuration==============================
2024-03-09 21:42:02 | INFO | root | Starting services:
2024-03-09 21:42:02 | INFO | root | To view the llm_api logs, go to /cpfs01/zhuhai/home/qianlab03/rjs/Langchain-Chatchat-0.2.5/logs
2024-03-09 21:42:08 | ERROR | stderr | INFO:     Started server process [20849]
2024-03-09 21:42:08 | ERROR | stderr | INFO:     Waiting for application startup.
2024-03-09 21:42:08 | ERROR | stderr | INFO:     Application startup complete.
2024-03-09 21:42:08 | ERROR | stderr | INFO:     Uvicorn running on http://127.0.0.1:20000 (Press CTRL+C to quit)
Loading checkpoint shards:   0%| | 0/7 [00:00<?, ?it/s]
Process model_worker - chatglm2-6b:
Traceback (most recent call last):
  File "/home/qianlab03/anaconda3/envs/rag_25/lib/python3.10/site-packages/transformers/modeling_utils.py", line 479, in load_state_dict
    return torch.load(checkpoint_file, map_location=map_location)
  File "/home/qianlab03/anaconda3/envs/rag_25/lib/python3.10/site-packages/torch/serialization.py", line 777, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "/home/qianlab03/anaconda3/envs/rag_25/lib/python3.10/site-packages/torch/serialization.py", line 282, in __init__
    super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/qianlab03/anaconda3/envs/rag_25/lib/python3.10/site-packages/transformers/modeling_utils.py", line 483, in load_state_dict
    if f.read(7) == "version":
  File "/home/qianlab03/anaconda3/envs/rag_25/lib/python3.10/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 128: invalid start byte
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/qianlab03/anaconda3/envs/rag_25/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/qianlab03/anaconda3/envs/rag_25/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/cpfs01/zhuhai/home/qianlab03/rjs/Langchain-Chatchat-0.2.5/startup.py", line 366, in run_model_worker
    app = create_model_worker_app(log_level=log_level, **kwargs)
  File "/cpfs01/zhuhai/home/qianlab03/rjs/Langchain-Chatchat-0.2.5/startup.py", line 194, in create_model_worker_app
    worker = ModelWorker(
  File "/home/qianlab03/anaconda3/envs/rag_25/lib/python3.10/site-packages/fastchat/serve/model_worker.py", line 221, in __init__
    self.model, self.tokenizer = load_model(
  File "/home/qianlab03/anaconda3/envs/rag_25/lib/python3.10/site-packages/fastchat/model/model_adapter.py", line 288, in load_model
    model, tokenizer = adapter.load_model(model_path, kwargs)
  File "/home/qianlab03/anaconda3/envs/rag_25/lib/python3.10/site-packages/fastchat/model/model_adapter.py", line 692, in load_model
    model = AutoModel.from_pretrained(
  File "/home/qianlab03/anaconda3/envs/rag_25/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 511, in from_pretrained
    return model_class.from_pretrained(
  File "/home/qianlab03/anaconda3/envs/rag_25/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3091, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/qianlab03/anaconda3/envs/rag_25/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3456, in _load_pretrained_model
    state_dict = load_state_dict(shard_file)
  File "/home/qianlab03/anaconda3/envs/rag_25/lib/python3.10/site-packages/transformers/modeling_utils.py", line 495, in load_state_dict
    raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for '/home/qianlab03/rjs/Langchain-Chatchat-0.2.7/chatglm2-6b/pytorch_model-00001-of-00007.bin' at '/home/qianlab03/rjs/Langchain-Chatchat-0.2.7/chatglm2-6b/pytorch_model-00001-of-00007.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
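The root cause is visible in the first traceback: "PytorchStreamReader failed reading zip archive: failed finding central directory" means the shard is not a valid zip archive at all, which typically happens when a download was truncated or when Git LFS left a small pointer file in place of the real weights. A minimal diagnostic sketch (the helper name is my own, not part of the project):

```python
import zipfile

def looks_like_pytorch_zip(path: str) -> bool:
    """Return True if `path` is a valid zip archive, the container format
    used by modern PyTorch checkpoints. False on a pytorch_model-*.bin
    shard usually means a truncated download or a Git LFS pointer file."""
    return zipfile.is_zipfile(path)
```

Run it against the shard named in the traceback; if it returns False, re-download that shard (or run `git lfs pull` in the model directory) before retrying.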
Hey, @iaoxuesheng! Great to see you again. Hope you've been doing well!
The problem you're encountering is likely a mismatch or incompatibility between the model you are trying to load (the chatglm2-6b copy under Langchain-Chatchat-0.2.7) and the Langchain-Chatchat version you are running (v0.2.5).
To resolve this, make sure that:
- The model checkpoint files are correctly placed and accessible in the expected directory structure. According to the error message, the model should live in a directory associated with Langchain-Chatchat-0.2.7, but there may be a problem with how the files are organized or named, especially since the failing file is part of a sharded checkpoint (pytorch_model-00001-of-00007.bin indicates it is one of seven shards).
- The Langchain-Chatchat version you are running (v0.2.5) is compatible with the model you are trying to load. If the model architecture or the model-loading code changed between versions 0.2.5 and 0.2.7, you may need to upgrade your Langchain-Chatchat installation to 0.2.7, or verify that the model files are compatible with 0.2.5.
- The configuration in model_config.py (or its equivalent in your setup) points to the model you are actually trying to load. If you have modified MODEL_ROOT_PATH or other related settings, make sure they correctly reflect the location and version of the model you want to use.
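To make the first check concrete, here is a small sketch (the function name and the 1 KiB threshold are my own choices, not project code) that verifies all seven shards of a sharded chatglm2-6b checkpoint exist and are not suspiciously small Git LFS pointer files:

```python
import os

def find_shard_problems(model_dir: str, total: int = 7) -> list:
    """List missing or suspiciously small shards of a sharded checkpoint
    named pytorch_model-XXXXX-of-XXXXX.bin (the chatglm2-6b scheme)."""
    problems = []
    for i in range(1, total + 1):
        name = "pytorch_model-%05d-of-%05d.bin" % (i, total)
        path = os.path.join(model_dir, name)
        if not os.path.isfile(path):
            problems.append("missing: " + name)
        elif os.path.getsize(path) < 1024:  # LFS pointer files are ~130 bytes
            problems.append("too small (LFS pointer?): " + name)
    return problems
```

An empty result means all seven shards are present and plausibly sized; any entry it returns names a shard worth re-downloading.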
If these steps do not resolve the problem, you may need to provide the exact error message or more details about the steps you have taken, so that more targeted help can be offered.
I also found some similar issues in the Langchain-Chatchat repository that may help. Some of them report similar errors and offer possible solutions; you can refer to the links below.
I hope this information helps. If you have any other questions or need further clarification, feel free to let me know.
Sources
- chatglm3-6b model fails to load
- Hit a small problem [BUG]: ChatGLM2-6B --> ptuning fine-tuned model errors out when loaded per the fine-tuned-model loading walkthrough
- [BUG] chatglm2-6b model fails to load
- configs/model_config.py.example
About Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
This issue has been marked as stale because it has had no activity for more than 30 days.