llama.cpp
convert-pth-to-ggml.py error with "Got unsupported ScalarType BFloat16"
I'm trying to convert the weights of the "chavinlo/alpaca-native" model (https://huggingface.co/chavinlo/alpaca-native) to ggml, but I get this error:
```
Processing part 0
Processing variable: model.embed_tokens.weight with shape: torch.Size([32001, 4096]) and type: torch.float32
Processing variable: model.layers.0.self_attn.q_proj.weight with shape: torch.Size([4096, 4096]) and type: torch.float32
Processing variable: model.layers.0.self_attn.k_proj.weight with shape: torch.Size([4096, 4096]) and type: torch.float32
Processing variable: model.layers.0.self_attn.v_proj.weight with shape: torch.Size([4096, 4096]) and type: torch.float32
Processing variable: model.layers.0.self_attn.o_proj.weight with shape: torch.Size([4096, 4096]) and type: torch.float32
Processing variable: model.layers.0.self_attn.rotary_emb.inv_freq with shape: torch.Size([64]) and type: torch.bfloat16
Traceback (most recent call last):
  File "/Users/domeie/projects/llama.cpp/convert-pth-to-ggml.py", line 157, in
```
Please review and use our issue template to provide more details so we can better understand your problem and answer you.
The scripts have been rewritten from scratch. Try the new convert.py script from master.
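Assuming the new script is invoked the same way as its predecessor, by pointing it at the model directory, something like the following should work (the path is illustrative):

```
python convert.py models/alpaca-native/
```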