params.json: FAILED
Before submitting a bug, please make sure the issue hasn't been already addressed by searching through the FAQs and existing/past issues
Describe the bug
After downloading all of the model parts for 70b-Instruct and 70b-Python, I am getting the following error:
consolidated.01.pth: OK
consolidated.02.pth: OK
consolidated.03.pth: OK
consolidated.04.pth: OK
consolidated.05.pth: OK
consolidated.06.pth: OK
consolidated.07.pth: OK
params.json: FAILED
tokenizer.model: OK
md5sum: WARNING: 1 line is improperly formatted
md5sum: WARNING: 1 computed checksum did NOT match
Minimal reproducible example
This is the contents of params.json
{
    "dim": 8192,
    "n_heads": 64,
    "n_kv_heads": 8,
    "n_layers": 80,
    "multiple_of": 4096,
    "ffn_dim_multiplier": 1.3,
    "norm_eps": 1e-5,
    "rope_theta": 10000
}
Output
consolidated.01.pth: OK
consolidated.02.pth: OK
consolidated.03.pth: OK
consolidated.04.pth: OK
consolidated.05.pth: OK
consolidated.06.pth: OK
consolidated.07.pth: OK
params.json: FAILED
tokenizer.model: OK
md5sum: WARNING: 1 line is improperly formatted
md5sum: WARNING: 1 computed checksum did NOT match
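For context, output in this shape is what md5sum prints when verifying files against a checksum manifest. The run above was presumably equivalent to something like the following, where checklist.chk is the manifest shipped with the model (mentioned in the maintainer reply below):

# run from the directory containing the downloaded model parts
md5sum -c checklist.chk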
Runtime Environment
- Model: [CodeLlama-70b-Instruct]
- Using via huggingface?: [no]
- OS: [Windows]
- GPU VRAM: 24 GB
- Number of GPUs: 1
- GPU Make: [Nvidia]
Additional context
How can params.json fail the checksum? The file exists and its contents are shown above.
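A FAILED line from md5sum means the computed hash of the file does not match the hash recorded in the manifest, not that the file is missing. One way to see which side is at fault is to compare the two directly; a minimal sketch, assuming GNU coreutils are available (e.g. via WSL or Git Bash on Windows):

# actual hash of the file on disk
md5sum params.json
# expected hash recorded in the manifest
grep params.json checklist.chk

If the two hashes differ, either the file was corrupted in transit or the manifest entry itself is wrong; the maintainer reply below indicates the latter for this case.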
Hey @aaronjolson, we had an issue with the checklist.chk we uploaded for the Python and Instruct models. This should be fixed now; can you try downloading the models again?
Please re-open if you need help.