
Sorry, I can't run

Open KingingWang opened this issue 2 years ago • 8 comments

(llama) -bash-4.2$ python inference.py --ckpt_dir ./models/7B --tokenizer_path ./models/tokenizer.model
Traceback (most recent call last):
  File "/home/ycshu_wlxy/kingingwang/pyllama-main/inference.py", line 67, in <module>
    run(
  File "/home/ycshu_wlxy/kingingwang/pyllama-main/inference.py", line 47, in run
    generator = load(ckpt_dir, tokenizer_path, local_rank, world_size, max_seq_len, max_batch_size)
  File "/home/ycshu_wlxy/kingingwang/pyllama-main/inference.py", line 22, in load
    checkpoint = torch.load(ckpt_path, map_location="cpu")
  File "/home/ycshu_wlxy/.conda/envs/llama/lib/python3.10/site-packages/torch/serialization.py", line 789, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/home/ycshu_wlxy/.conda/envs/llama/lib/python3.10/site-packages/torch/serialization.py", line 1131, in _load
    result = unpickler.load()
  File "/home/ycshu_wlxy/.conda/envs/llama/lib/python3.10/site-packages/torch/serialization.py", line 1101, in persistent_load
    load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "/home/ycshu_wlxy/.conda/envs/llama/lib/python3.10/site-packages/torch/serialization.py", line 1079, in load_tensor
    storage = zip_file.get_storage_from_record(name, numel, torch.UntypedStorage).storage().untyped()
RuntimeError: PytorchStreamReader failed reading file data/22: invalid header or archive is corrupted
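The error comes from torch.load failing to read the checkpoint archive, which usually means the download is truncated or corrupted. Since modern torch.save() output is a zip archive, a cheap first check is whether the .pth file is even a readable zip; a minimal sketch (the path in the example is illustrative, not a guaranteed filename):

```python
import zipfile

def checkpoint_looks_valid(path: str) -> bool:
    """Cheap sanity check: modern torch.save() output is a zip archive.

    Returns False for truncated or corrupted downloads. A True result
    does not guarantee the tensors inside are intact, only that the
    archive structure and member CRCs are readable.
    """
    if not zipfile.is_zipfile(path):
        return False
    try:
        with zipfile.ZipFile(path) as zf:
            # testzip() re-reads every member and checks its CRC;
            # it returns the first bad member name, or None if all pass.
            return zf.testzip() is None
    except zipfile.BadZipFile:
        return False

# Example (path is illustrative):
# checkpoint_looks_valid("./models/7B/consolidated.00.pth")
```

This catches truncated downloads without loading multi-gigabyte tensors into memory.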

KingingWang avatar Mar 09 '23 07:03 KingingWang

Are your model files valid?

juncongmoo avatar Mar 09 '23 08:03 juncongmoo

"invalid header or archive is corrupted" means the model file is corrupted.

vo2021 avatar Mar 09 '23 08:03 vo2021

I'll re-download the .pth file.

KingingWang avatar Mar 09 '23 09:03 KingingWang

After downloading the model, it is recommended to run an MD5 check first. @KingingWang

soulteary avatar Mar 09 '23 09:03 soulteary

Thanks, but when I run python web_server_single.py --ckpt_dir ../../models/7B --tokenizer_path ../../models/tokenizer.model I see:

(llama) -bash-4.2$ python web_server_single.py --ckpt_dir ../../models/7B --tokenizer_path ../../models/tokenizer.model
INFO:     Started server process [34473]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
INFO:     12.12.12.202:51248 - "GET / HTTP/1.1" 404 Not Found
INFO:     12.12.12.202:51248 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO:     12.12.12.202:51248 - "GET / HTTP/1.1" 404 Not Found
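A 404 here means the server is up and answering; it simply has no route registered for "/" or "/favicon.ico", so requests have to go to whatever path the app actually defines. A minimal stdlib illustration of that behavior (the /generate path is hypothetical, not pyllama's actual route):

```python
import threading
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    # Hypothetical route table; web_server_single.py defines its own paths.
    ROUTES = {"/generate"}

    def do_GET(self):
        if self.path in self.ROUTES:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            # Any unregistered path, including "/", gets a 404 even
            # though the server process is perfectly healthy.
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
root_status = conn.getresponse().status        # 404: no root route
conn.close()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/generate")
generate_status = conn.getresponse().status    # 200: registered route
conn.close()
server.shutdown()
```

If web_server_single.py is built on FastAPI (the Uvicorn banner suggests an ASGI app), visiting /docs or /openapi.json in a browser usually lists the registered paths.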

KingingWang avatar Mar 09 '23 12:03 KingingWang

Same problem, and I don't know why it returns 404.

aiot-tech avatar Mar 09 '23 12:03 aiot-tech

I guess an absolute URI is required?

shadowwalker2718 avatar Mar 09 '23 16:03 shadowwalker2718

> Same problem, and I don't know why it returns 404.

You can try https://github.com/SWHL/LLaMADemo

SWHL avatar Mar 10 '23 01:03 SWHL