
convert-pth-to-ggml.py error with "Got unsupported ScalarType BFloat16"

Open austinchau opened this issue 1 year ago • 1 comment

Trying to convert the "chavinlo/alpaca-native" Alpaca model weights (https://huggingface.co/chavinlo/alpaca-native) to ggml, but I get this error:

Processing part 0

Processing variable: model.embed_tokens.weight with shape: torch.Size([32001, 4096]) and type: torch.float32
Processing variable: model.layers.0.self_attn.q_proj.weight with shape: torch.Size([4096, 4096]) and type: torch.float32
Processing variable: model.layers.0.self_attn.k_proj.weight with shape: torch.Size([4096, 4096]) and type: torch.float32
Processing variable: model.layers.0.self_attn.v_proj.weight with shape: torch.Size([4096, 4096]) and type: torch.float32
Processing variable: model.layers.0.self_attn.o_proj.weight with shape: torch.Size([4096, 4096]) and type: torch.float32
Processing variable: model.layers.0.self_attn.rotary_emb.inv_freq with shape: torch.Size([64]) and type: torch.bfloat16
Traceback (most recent call last):
  File "/Users/domeie/projects/llama.cpp/convert-pth-to-ggml.py", line 157, in <module>
    main()
  File "/Users/domeie/projects/llama.cpp/convert-pth-to-ggml.py", line 151, in main
    process_and_write_variables(fout, model, ftype)
  File "/Users/domeie/projects/llama.cpp/convert-pth-to-ggml.py", line 109, in process_and_write_variables
    data = datao.numpy().squeeze()
TypeError: Got unsupported ScalarType BFloat16
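For context on the failure itself: NumPy has no bfloat16 dtype, so calling `.numpy()` on a `torch.bfloat16` tensor (here the `rotary_emb.inv_freq` buffer) raises this TypeError. A minimal workaround sketch is to upcast such tensors to float32 before conversion; `to_numpy_safe` is a hypothetical helper name, not part of the actual script:

```python
import torch

def to_numpy_safe(tensor: torch.Tensor):
    """Convert a torch tensor to a NumPy array, upcasting bfloat16 first.

    NumPy has no bfloat16 dtype, so Tensor.numpy() raises
    "TypeError: Got unsupported ScalarType BFloat16" on bf16 tensors.
    Upcasting to float32 is lossless for bfloat16 values.
    """
    if tensor.dtype == torch.bfloat16:
        tensor = tensor.to(torch.float32)
    return tensor.numpy().squeeze()

# Example: the tensor that triggered the traceback above was bfloat16.
inv_freq = torch.ones(64, dtype=torch.bfloat16)
data = to_numpy_safe(inv_freq)  # float32 array, no TypeError
```

This mirrors the fix later adopted by the rewritten conversion scripts, which handle bfloat16 weights explicitly.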

austinchau avatar Mar 22 '23 16:03 austinchau

Please review and use our issue template to provide more details so we can better understand your problem and answer you.

gjmulder avatar Mar 22 '23 17:03 gjmulder

The scripts have been rewritten from scratch. Try the new convert.py script from master.

prusnak avatar Apr 16 '23 09:04 prusnak