llama-go
[User] I encountered a problem ..
Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- [x] I carefully followed the README.md.
- [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.
Expected Behavior

I expected the model converter tool (`convert-pth-to-ggml.py`) to convert the llama-2-7b checkpoint without errors.

Current Behavior

Running the converter instead produces the following output and traceback:
```
{'dim': 4096, 'multiple_of': 256, 'n_heads': 32, 'n_layers': 32, 'norm_eps': 1e-05, 'vocab_size': -1}
Namespace(dir_model='../llama.cpp/models/llama-2-7b/', ftype=1, vocab_only=0)
n_parts = 1
Processing part 0
Processing variable: tok_embeddings.weight with shape: torch.Size([32000, 4096]) and type: torch.bfloat16
Traceback (most recent call last):
  File "/Users/vivekv/software/llama-go/./convert-pth-to-ggml.py", line 181, in <module>
    main()
  File "/Users/vivekv/software/llama-go/./convert-pth-to-ggml.py", line 174, in main
    process_and_write_variables(fout, model, ftype)
  File "/Users/vivekv/software/llama-go/./convert-pth-to-ggml.py", line 109, in process_and_write_variables
    data = datao.numpy().squeeze()
           ^^^^^^^^^^^^^
TypeError: Got unsupported ScalarType BFloat16
```
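The error happens because numpy has no bfloat16 dtype, so calling `.numpy()` on a `torch.bfloat16` tensor (as line 109 of the converter does) raises this `TypeError`. A common workaround, sketched below and not verified against this repo's converter, is to upcast such tensors to float32 before the numpy conversion. The helper name `to_numpy_compat` is hypothetical, purely for illustration:

```python
import torch

def to_numpy_compat(tensor: torch.Tensor):
    """Hypothetical helper: convert a tensor to numpy, working around
    'TypeError: Got unsupported ScalarType BFloat16'."""
    # numpy cannot represent bfloat16, so upcast first. Every bfloat16
    # value is exactly representable in float32, so this is lossless.
    if tensor.dtype == torch.bfloat16:
        tensor = tensor.to(torch.float32)
    # Mirrors the converter's original call: datao.numpy().squeeze()
    return tensor.numpy().squeeze()
```

Applying the same `.to(torch.float32)` cast at line 109 of `convert-pth-to-ggml.py` (before `.numpy()`) should let the conversion proceed; when `ftype=1` the data is presumably narrowed to float16 later in the pipeline anyway.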
Environment and Context

- Physical (or virtual) hardware: M1 Mac Pro
```
$ python3 --version
Python 3.11.4

$ make --version
GNU Make 3.81
Copyright (C) 2006 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
This program built for i386-apple-darwin11.3.0

$ g++ --version
Apple clang version 14.0.3 (clang-1403.0.22.14.1)
Target: arm64-apple-darwin22.4.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
```