
AttributeError: 'GGUFWriter' object has no attribute 'add_vocab_size'

HelloNicoo opened this issue 2 months ago • 6 comments

Hi, when I converted the large model weights to GGUF format, I encountered this error.

HelloNicoo commented Apr 10, 2024

Hello, which model architecture? Please share the steps to reproduce your issue.

phymbert commented Apr 10, 2024

> Hello, which model architecture? Please share the steps to reproduce your issue

The model that I used is yi-34B-200k. I executed this command:

python3 convert.py --outtype f16 /mnt/llm/01ai/Yi-34B-Chat-stable/

HelloNicoo commented Apr 11, 2024

I get this same error when running convert.py on a model in pth format, attempting to convert it to GGML. I am using the command:

$ python3 convert.py llama-2-13b --outfile ~/scratch/models/llama-2-13b.ggml.bin

and at the end of my output, I get (edited to remove full path names):

skipping tensor rope_freqs
Writing llama-2-13b.ggml.bin, format 0
Ignoring added_tokens.json since model matches vocab size without it.
gguf: This GGUF file is for Little Endian only
Traceback (most recent call last):
  File "cuda_llama.cpp/b2684/bin/convert.py", line 1548, in <module>
    main()
  File "convert.py", line 1542, in main
    OutputFile.write_all(outfile, ftype, params, model, vocab, special_vocab,
  File "convert.py", line 1212, in write_all
    of.add_meta_arch(params)
  File "convert.py", line 1066, in add_meta_arch
    self.gguf.add_vocab_size          (params.n_vocab)
AttributeError: 'GGUFWriter' object has no attribute 'add_vocab_size'

This is running the latest version of llama.cpp, b2684. I also found the same output with version b2619; I updated to the latest and the error persists.
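A quick way to check which gguf package convert.py is actually importing, and whether it has the method the traceback complains about (a minimal sketch; the printed path depends on the environment):

import gguf

print(gguf.__file__)                               # which gguf package is actually being imported
print(hasattr(gguf.GGUFWriter, "add_vocab_size"))  # False here reproduces this AttributeError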

Thanks, Shelly

jshelly commented Apr 16, 2024

Do not pip install gguf. Use the gguf package in <src_root>/gguf-py; see <src_root>/gguf-py/README.md.
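A minimal sketch of one way to do that from Python, assuming the llama.cpp checkout lives at ~/llama.cpp (adjust the path for your clone); it puts the in-tree gguf-py ahead of any pip-installed gguf:

import sys
from pathlib import Path

# prepend the in-tree package directory so it wins over a pip-installed gguf
sys.path.insert(0, str(Path.home() / "llama.cpp" / "gguf-py"))

import gguf

print(gguf.__file__)                               # should now point into .../llama.cpp/gguf-py/gguf/
print(hasattr(gguf.GGUFWriter, "add_vocab_size"))  # expected True with the in-tree version

Uninstalling the pip package and installing the one from <src_root>/gguf-py instead (as described in its README) achieves the same result.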

zhujianf commented Apr 23, 2024

Hello, I had already pip-installed gguf, and it still gives me this error. My OS is Red Hat 7.9. I created a Python 3.11 venv, and that is where I pip-installed all of the packages from the requirements.txt file in <src_root>, which includes gguf. I have confirmed that I am able to import gguf in Python. Do you have any other suggestions?

jshelly commented Apr 23, 2024

Sorry, I didn't read your message properly. I uninstalled my version of gguf and reinstalled it from the one in <src_root>/gguf-py, and it seems to be working now. Thank you very much!

jshelly commented Apr 23, 2024