
[BUG/Help] Unable to run after updating; the model appears to fail to load

Open · yoshikizh opened this issue on Apr 16 '23 · 4 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues

Current Behavior

D:\web\chatGLM\ChatGLM-6B> python.exe .\web_demo.py
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards:  25%|██████████████▎ | 2/8 [00:02<00:06, 1.14s/it]
Traceback (most recent call last):
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 415, in load_state_dict
    return torch.load(checkpoint_file, map_location="cpu")
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 797, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 283, in __init__
    super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 419, in load_state_dict
    if f.read(7) == "version":
UnicodeDecodeError: 'gbk' codec can't decode byte 0x80 in position 64: illegal multibyte sequence

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\web\chatGLM\ChatGLM-6B\web_demo.py", line 6, in <module>
    model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\auto\auto_factory.py", line 466, in from_pretrained
    return model_class.from_pretrained(
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 2646, in from_pretrained
    ) = cls._load_pretrained_model(
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 2955, in _load_pretrained_model
    state_dict = load_state_dict(shard_file)
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 431, in load_state_dict
    raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for 'C:\Users\zh/.cache\huggingface\hub\models--THUDM--chatglm-6b\snapshots\4de8efebc837788ffbfc0a15663de8553da362a2\pytorch_model-00003-of-00008.bin' at 'C:\Users\zh/.cache\huggingface\hub\models--THUDM--chatglm-6b\snapshots\4de8efebc837788ffbfc0a15663de8553da362a2\pytorch_model-00003-of-00008.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
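The root cause is the first RuntimeError: pytorch_model-00003-of-00008.bin in the Hugging Face cache is not a valid zip archive (modern PyTorch checkpoints are zip files), which typically means the shard was truncated during download. The UnicodeDecodeError is only a secondary symptom of transformers' fallback path re-opening the broken file as text on a Chinese-locale Windows. A minimal sketch to find the bad shards; the snapshot path is copied from the traceback above, so adjust it for your machine:

```python
# Minimal sketch: modern PyTorch checkpoints are zip archives, so a shard
# that fails zipfile.is_zipfile() is the one torch.load cannot open.
# The snapshot path below is copied from the traceback; adjust as needed.
import zipfile
from pathlib import Path

snapshot = (Path.home() / ".cache/huggingface/hub/models--THUDM--chatglm-6b"
            / "snapshots/4de8efebc837788ffbfc0a15663de8553da362a2")

for shard in sorted(snapshot.glob("pytorch_model-*.bin")):
    ok = zipfile.is_zipfile(shard)
    print(f"{shard.name}: {'ok' if ok else 'CORRUPT, re-download this shard'}")
```

Deleting any corrupt shard and re-running web_demo.py should trigger a fresh download of just that file.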

Expected Behavior

No response

Steps To Reproduce

python.exe .\web_demo.py

Environment

- OS: Windows 10
- Python: 3.10.6
- Transformers:
- PyTorch:
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) : True

Anything else?

No response

yoshikizh · Apr 16 '23 08:04

I'm running into the same error. Is there a known fix?

winie-hy · Apr 24 '23 07:04

Same error here, +1.

disunlike · Apr 26 '23 04:04

+1

Shukino20001015 · Apr 30 '23 05:04

You can try loading the model from a local directory: https://github.com/THUDM/ChatGLM-6B#%E4%BB%8E%E6%9C%AC%E5%9C%B0%E5%8A%A0%E8%BD%BD%E6%A8%A1%E5%9E%8B
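That README section boils down to downloading the full weights first and then pointing from_pretrained at the local directory instead of the Hub name. A sketch; the directory below is a placeholder for wherever you cloned the weights:

```python
# Sketch of loading from a local checkout, per the README section linked
# above. Download the repo first, e.g.:
#   git clone https://huggingface.co/THUDM/chatglm-6b
#   git lfs pull
from transformers import AutoModel, AutoTokenizer

local_dir = "D:/web/chatGLM/chatglm-6b"  # placeholder: your local model dir
tokenizer = AutoTokenizer.from_pretrained(local_dir, trust_remote_code=True)
model = AutoModel.from_pretrained(local_dir, trust_remote_code=True).half().cuda()
model = model.eval()
```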

duzx16 · Apr 30 '23 15:04

Hitting this too; re-downloading didn't help either.

WildXBird · May 09 '23 16:05

Make sure the download actually completed. In my case the download errored partway through, which is why loading failed; re-downloading fixed it.

ray-008 · May 11 '23 09:05

Models downloaded via git lfs pull load fine, but if you download the weights manually from the Tsinghua Cloud mirror, you hit this error.
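If you do download shards manually, one hedged way to check them is to hash each file and compare the result by hand against the SHA256 that Hugging Face displays on each file's page at https://huggingface.co/THUDM/chatglm-6b. The directory below is a placeholder:

```python
# Hedged sketch: print the SHA256 of each downloaded shard, then compare
# by hand with the checksums shown per file on the Hugging Face model page.
import hashlib
from pathlib import Path

model_dir = Path("D:/web/chatGLM/chatglm-6b")  # placeholder path

for shard in sorted(model_dir.glob("pytorch_model-*.bin")):
    h = hashlib.sha256()
    with shard.open("rb") as f:
        # Hash in 1 MiB chunks so multi-GB shards don't need to fit in RAM.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    print(f"{shard.name}: {h.hexdigest()}")
```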

rmrf · May 13 '23 04:05