llama.cpp
Misc. bug: convert_hf_to_gguf.py: ValueError: Can not map tensor 'model.layers.0.mlp.down_proj.weight_scale'
Name and Version
version: 4410 (4b0c638b) built with cc (GCC) 14.2.1 20240912 (Red Hat 14.2.1-3) for x86_64-redhat-linux
Operating systems
Linux
Which llama.cpp modules do you know to be affected?
Python/Bash scripts
Command line
python ./convert_hf_to_gguf.py ~/downloads/Falcon3-10B-Instruct-1.58bit/
Problem description & steps to reproduce
Trying to convert tiiuae/Falcon3-10B-Instruct-1.58bit to GGUF fails with the ValueError shown in the log output below: conversion aborts as soon as it reaches the first `weight_scale` tensor, which the converter's tensor-name map does not recognize.
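For context: 1.58-bit (BitNet-style) checkpoints store an auxiliary `weight_scale` tensor next to each packed weight, and `map_tensor_name()` has no GGUF name for it, hence the ValueError. Below is a minimal standalone sketch (not the converter's actual API) of the kind of filtering a fix might do in `modify_tensors` — the suffix list and the decision to drop the tensors (rather than fold the scale into the weights) are illustrative assumptions only:

```python
# Hypothetical sketch: filter auxiliary quantization tensors before the
# name-mapping step so the generic mapper never sees them. Whether
# dropping them is correct depends on whether the packed weights are
# dequantized elsewhere.
AUX_SUFFIXES = (".weight_scale",)

def filter_aux_tensors(tensors: dict) -> dict:
    """Return only the tensors whose names the generic mapper handles."""
    return {name: t for name, t in tensors.items()
            if not name.endswith(AUX_SUFFIXES)}
```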
First Bad Commit
No response
Relevant log output
INFO:hf-to-gguf:Loading model: Falcon3-10B-Instruct-1.58bit
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Exporting model...
INFO:hf-to-gguf:gguf: loading model part 'model.safetensors'
INFO:hf-to-gguf:output.weight, torch.bfloat16 --> F16, shape = {3072, 131072}
INFO:hf-to-gguf:token_embd.weight, torch.bfloat16 --> F16, shape = {3072, 131072}
INFO:hf-to-gguf:blk.0.attn_norm.weight, torch.bfloat16 --> F32, shape = {3072}
INFO:hf-to-gguf:blk.0.ffn_down.weight, torch.uint8 --> F16, shape = {23040, 768}
Traceback (most recent call last):
  File "/mnt/src/llama.cpp/./convert_hf_to_gguf.py", line 4929, in <module>
    main()
  File "/mnt/src/llama.cpp/./convert_hf_to_gguf.py", line 4923, in main
    model_instance.write()
  File "/mnt/src/llama.cpp/./convert_hf_to_gguf.py", line 438, in write
    self.prepare_tensors()
  File "/mnt/src/llama.cpp/./convert_hf_to_gguf.py", line 1692, in prepare_tensors
    super().prepare_tensors()
  File "/mnt/src/llama.cpp/./convert_hf_to_gguf.py", line 298, in prepare_tensors
    for new_name, data_torch in (self.modify_tensors(data_torch, name, bid)):
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/src/llama.cpp/./convert_hf_to_gguf.py", line 1660, in modify_tensors
    return [(self.map_tensor_name(name), data_torch)]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/src/llama.cpp/./convert_hf_to_gguf.py", line 214, in map_tensor_name
    raise ValueError(f"Can not map tensor {name!r}")
ValueError: Can not map tensor 'model.layers.0.mlp.down_proj.weight_scale'
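A note on the torch.uint8 `blk.0.ffn_down.weight` in the log: its last logged dimension is 768, a quarter of the model's 3072 hidden size, suggesting four 2-bit ternary values packed per byte. A sketch of how such a tensor might be dequantized with its `weight_scale` — the packing order (low bits first) and the code mapping {0, 1, 2} → {-1, 0, +1} are guesses for illustration, not the documented format of this checkpoint:

```python
def dequantize_bitnet_block(packed: bytes, weight_scale: float) -> list[float]:
    """Unpack 2-bit codes (4 per byte, low bits first) into ternary
    weights and apply the per-tensor scale. Packing order and code
    mapping are illustrative assumptions, not the checkpoint's
    documented layout."""
    out = []
    for byte in packed:
        for shift in (0, 2, 4, 6):
            code = (byte >> shift) & 0b11   # extract one 2-bit field
            out.append((code - 1) * weight_scale)
    return out
```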