llama.cpp
RuntimeError: PytorchStreamReader failed reading zip archive: not a ZIP archive
Hello, I tried to convert the 7B model to ggml FP16 format, but I ran into a problem. Could this be caused by the model files themselves? 🙏🏻
python3 convert-pth-to-ggml.py models/7B/ 1
.
├── CMakeLists.txt
├── LICENSE
├── Makefile
├── README.md
├── convert-pth-to-ggml.py
├── ggml.c
├── ggml.h
├── ggml.o
├── main
├── main.cpp
├── models
│ ├── 7B
│ │ ├── checklist.chk
│ │ ├── consolidated.00.pth
│ │ └── params.json
│ ├── tokenizer.model
│ └── tokenizer_checklist.chk
├── quantize
├── quantize.cpp
├── quantize.sh
├── utils.cpp
├── utils.h
└── utils.o
(Lab2) @-MacBook-Pro llama.cpp % python convert-pth-to-ggml.py models/7B/ 1
{'dim': 4096, 'multiple_of': 256, 'n_heads': 32, 'n_layers': 32, 'norm_eps': 1e-06, 'vocab_size': 32000}
n_parts = 1
Processing part 0
Traceback (most recent call last):
File "Lab2/llama.cpp/convert-pth-to-ggml.py", line 88, in <module>
model = torch.load(fname_model, map_location="cpu")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Lab2/lib/python3.11/site-packages/torch/serialization.py", line 799, in load
with _open_zipfile_reader(opened_file) as opened_zipfile:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Lab2/lib/python3.11/site-packages/torch/serialization.py", line 285, in __init__
super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: PytorchStreamReader failed reading zip archive: not a ZIP archive
(Lab2) @-MacBook-Pro llama.cpp %
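The error comes from `torch.load`: modern `torch.save` writes checkpoints as ZIP archives, so "not a ZIP archive" usually means the `.pth` file is truncated or corrupted. A quick way to check, before digging further, is a sketch like this (the checkpoint path is the one from the traceback; adjust to your layout):

```python
import os
import zipfile

def looks_like_torch_zip(path: str) -> bool:
    """True if `path` exists and parses as a ZIP archive (the container
    format modern torch.save uses). A truncated or corrupted download
    returns False, matching the 'not a ZIP archive' error above."""
    return os.path.isfile(path) and zipfile.is_zipfile(path)

if __name__ == "__main__":
    # Path taken from the failing command above.
    print(looks_like_torch_zip("models/7B/consolidated.00.pth"))
```

If this prints `False` for a file that is present and full-sized, the download is bad and re-fetching the weights is the likely fix.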
Same issue, and I fixed it by re-downloading the 7B model.
https://github.com/facebookresearch/llama/pull/73
The first time I downloaded via IPFS; the second time I downloaded via magnet link, and the problem was gone.
Hope it helps.
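Before (or after) re-downloading, you can verify the weights against the shipped checksums: each model folder includes a `checklist.chk` file of MD5 sums, per the directory listing above. The real command would be `cd models/7B && md5sum -c checklist.chk` (commented out below since the weights aren't present here); this is a self-contained demo of the same mechanism. Note that stock macOS lacks `md5sum` — `brew install coreutils` provides it, or compare `md5 consolidated.00.pth` against `checklist.chk` by hand.

```shell
# Real usage (commented out; requires the downloaded weights):
#   cd models/7B && md5sum -c checklist.chk

# Self-contained demo of the same mechanism:
tmp=$(mktemp -d)
printf 'hello' > "$tmp/weights.bin"
cd "$tmp"
md5sum weights.bin > checklist.chk   # record the checksum
md5sum -c checklist.chk              # re-verify; reports OK when intact
```

A failed check (`FAILED` instead of `OK`) confirms a corrupted download rather than a llama.cpp bug.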
Wow, thank you! It actually works!