pyllama
Sorry, I can't run
(llama) -bash-4.2$ python inference.py --ckpt_dir ./models/7B --tokenizer_path ./models/tokenizer.model
Traceback (most recent call last):
File "/home/ycshu_wlxy/kingingwang/pyllama-main/inference.py", line 67, in
Are your model files valid?
The error is "invalid header or archive is corrupted", so the model file is corrupted.
I'll re-download the .pth file.
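One way to confirm the download is actually bad before pulling it again is to try loading the checkpoint directly; a corrupted archive fails in torch.load with a similar error. A minimal sketch, assuming the standard consolidated.00.pth filename of the 7B checkpoint:

import torch

try:
    # map_location="cpu" so no GPU is needed just to validate the file
    torch.load("./models/7B/consolidated.00.pth", map_location="cpu")
    print("checkpoint loaded OK")
except Exception as e:
    print(f"checkpoint appears corrupted: {e}")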
After downloading the model, it is recommended to do an md5 check first @KingingWang
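For reference, each model folder in the original LLaMA release includes a checklist.chk with per-file MD5 sums, which md5sum -c checklist.chk can verify in one shot. A minimal Python sketch of the same check (the path is an example; compare the printed digest against the matching line in checklist.chk):

import hashlib

def md5_of(path, chunk_size=1 << 20):
    # stream in 1 MiB chunks so multi-GB checkpoints never sit fully in memory
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

print(md5_of("./models/7B/consolidated.00.pth"))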
Thanks, but when I use python web_server_single.py --ckpt_dir ../../models/7B --tokenizer_path ../../models/tokenizer.model, I see:
(llama) -bash-4.2$ python web_server_single.py --ckpt_dir ../../models/7B --tokenizer_path ../../models/tokenizer.model
INFO: Started server process [34473]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
INFO: 12.12.12.202:51248 - "GET / HTTP/1.1" 404 Not Found
INFO: 12.12.12.202:51248 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO: 12.12.12.202:51248 - "GET / HTTP/1.1" 404 Not Found
Same problem, and I don't know why it returns 404.
I guess an absolute URI is required?
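For what it's worth, a 404 on GET / is expected if the FastAPI app in web_server_single.py registers only an inference route and nothing for / or /favicon.ico; the server is up, you are just hitting paths it never defined. A hedged sketch of querying it, where the /llama route and the "prompt" field are assumptions to be checked against the routes web_server_single.py actually declares:

import requests

# Route path and payload schema are guesses - open web_server_single.py and
# look for @app.post(...) / @app.get(...) to find the real endpoint.
resp = requests.post(
    "http://127.0.0.1:8080/llama",  # the server logged itself on 0.0.0.0:8080
    json={"prompt": "Hello, world"},
)
print(resp.status_code, resp.text)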
You can try https://github.com/SWHL/LLaMADemo