
KeyError: 'transformer.wte.weight'

h9-tect opened this issue · 12 comments

Hello, I'm having this issue while converting the model:

!python llama.cpp/convert.py jais-13b \
  --outfile jais-13b.gguf \
  --outtype q8_0
Loading model file jais-13b/pytorch_model-00001-of-00006.bin
Loading model file jais-13b/pytorch_model-00001-of-00006.bin
Loading model file jais-13b/pytorch_model-00002-of-00006.bin
Loading model file jais-13b/pytorch_model-00003-of-00006.bin
Loading model file jais-13b/pytorch_model-00004-of-00006.bin
Loading model file jais-13b/pytorch_model-00005-of-00006.bin
Loading model file jais-13b/pytorch_model-00006-of-00006.bin
Traceback (most recent call last):
  File "/content/llama.cpp/convert.py", line 1279, in <module>
    main()
  File "/content/llama.cpp/convert.py", line 1207, in main
    model_plus = load_some_model(args.model)
  File "/content/llama.cpp/convert.py", line 1142, in load_some_model
    model_plus = merge_multifile_models(models_plus)
  File "/content/llama.cpp/convert.py", line 635, in merge_multifile_models
    model = merge_sharded([mp.model for mp in models_plus])
  File "/content/llama.cpp/convert.py", line 614, in merge_sharded
    return {name: convert(name) for name in names}
  File "/content/llama.cpp/convert.py", line 614, in <dictcomp>
    return {name: convert(name) for name in names}
  File "/content/llama.cpp/convert.py", line 589, in convert
    lazy_tensors: list[LazyTensor] = [model[name] for model in models]
  File "/content/llama.cpp/convert.py", line 589, in <listcomp>
    lazy_tensors: list[LazyTensor] = [model[name] for model in models]
KeyError: 'transformer.wte.weight'
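For context, the KeyError comes from the shard-merging step: convert.py takes the tensor names it expects and indexes every shard with each name, so a checkpoint that stores its token embedding under a different key than the GPT-2-style transformer.wte.weight fails immediately. A minimal sketch of that lookup, with hypothetical shards as plain dicts:

```python
# Minimal sketch of convert.py's shard merge (hypothetical shards as plain
# dicts). The converter indexes EVERY shard with each expected tensor name,
# so a checkpoint that names its token embedding differently from
# 'transformer.wte.weight' raises KeyError right away.
shards = [
    {"transformer.h.0.attn.c_attn.weight": "shard0-tensor"},
    {"transformer.h.1.attn.c_attn.weight": "shard1-tensor"},
]
names = ["transformer.wte.weight"]  # name the GPT-2-style loader expects

missing = None
try:
    merged = {name: [shard[name] for shard in shards] for name in names}
except KeyError as exc:
    missing = exc.args[0]
    print("missing tensor:", missing)
```

Listing the actual keys of one shard (e.g. `torch.load(path, map_location="cpu").keys()`) is a quick way to see which names the checkpoint really uses and whether a different converter or a tensor-name mapping is needed.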

h9-tect avatar Dec 19 '23 08:12 h9-tect

@h9-tect Did you figure it out?

dspasyuk avatar Dec 23 '23 22:12 dspasyuk

@dspasyuk Not yet

h9-tect avatar Dec 25 '23 14:12 h9-tect

Have you tried using convert-hf-to-gguf.py?

jadechip avatar Dec 26 '23 09:12 jadechip

@jadechip yeah, didn't work

h9-tect avatar Dec 28 '23 07:12 h9-tect

Got the same problem on StarCoder 15B.

LaniakeaS avatar Dec 29 '23 07:12 LaniakeaS

@h9-tect Any updates?

dz28b avatar Jan 08 '24 13:01 dz28b

Nah

h9-tect avatar Jan 10 '24 08:01 h9-tect

Have you tried using convert-hf-to-gguf.py?

The same problem occurs with llama.cpp/convert.py, but convert-hf-to-gguf.py works. Model: Qwen-72B-Chat.

gswsqffsapd3056 avatar Jan 18 '24 02:01 gswsqffsapd3056

Interesting update... I tried convert-hf-to-gguf.py to convert starchat-beta and got the following result:

Traceback (most recent call last):
  File "/home/guest/**/llama.cpp/convert-hf-to-gguf.py", line 1173, in <module>
    model_instance.write()
  File "/home/guest/**/llama.cpp/convert-hf-to-gguf.py", line 136, in write
    self.write_tensors()
  File "/home/guest/**/llama.cpp/convert-hf-to-gguf.py", line 97, in write_tensors
    for name, data_torch in self.get_tensors():
  File "/home/guest/**/llama.cpp/convert-hf-to-gguf.py", line 62, in get_tensors
    ctx = contextlib.nullcontext(torch.load(str(self.dir_model / part_name), map_location="cpu", weights_only=True))
  File "/home/guest/miniconda3/envs/code_model/lib/python3.10/site-packages/torch/serialization.py", line 791, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/home/guest/miniconda3/envs/code_model/lib/python3.10/site-packages/torch/serialization.py", line 271, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/home/guest/miniconda3/envs/code_model/lib/python3.10/site-packages/torch/serialization.py", line 252, in __init__
    super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'models/starchat-beta/pytorch_model-00001-of-00005.bin'

But there are only 4 weight files, not 5. I don't know where the fifth one came from...

added_tokens.json	handler.py				   pytorch_model-00004-of-00004.bin  trainer_state.json
all_results.json	merges.txt				   pytorch_model.bin.index.json      training_args.bin
config.json		model-00001-of-00004.safetensors.download  README.md			     train_results.json
dialogue_template.json	model_logo.png				   requirements.txt		     vocab.json
eval_results.json	pytorch_model-00001-of-00004.bin	   special_tokens_map.json
generation_config.json	pytorch_model-00002-of-00004.bin	   tokenizer_config.json
ggml-model-f16.gguf	pytorch_model-00003-of-00004.bin	   tokenizer.json

Btw, StarCoder works fine under convert-hf-to-gguf.py.

LaniakeaS avatar Jan 18 '24 09:01 LaniakeaS

This is because you have another file with the "bin" extension: training_args.bin. convert-hf-to-gguf.py counts the weight shards simply by counting the "*.bin" files in the model directory, so the extra file inflates the expected shard count. Renaming training_args.bin to something without the .bin suffix solves the problem.
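The pitfall can be reproduced in a few lines; the pattern below is illustrative only (the real converter globs the model directory), using the file names from the listing above:

```python
# Sketch of the shard-counting pitfall: counting every "*.bin" file
# overcounts when unrelated .bin files (e.g. training_args.bin) sit next
# to the weight shards, so the converter expects a fifth shard that
# doesn't exist.
import fnmatch

files = [
    "pytorch_model-00001-of-00004.bin",
    "pytorch_model-00002-of-00004.bin",
    "pytorch_model-00003-of-00004.bin",
    "pytorch_model-00004-of-00004.bin",
    "training_args.bin",  # not a weight shard
]

naive = [f for f in files if fnmatch.fnmatch(f, "*.bin")]
strict = [f for f in files if fnmatch.fnmatch(f, "pytorch_model-*.bin")]
print(len(naive), len(strict))  # 5 assumed shards vs 4 actually present
```

Renaming training_args.bin (e.g. to training_args.bin.bak) makes the naive count match the real shard count, which is exactly the workaround described here.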

wanbo432503 avatar Feb 20 '24 02:02 wanbo432503

Ah, you are right. That solved it. But it's still a little odd to match only the suffix instead of the whole file name, right? Does this mean it's a bug that needs to be fixed?

LaniakeaS avatar Feb 20 '24 03:02 LaniakeaS

Hi, I am encountering the same error as OP. Changing the conversion command to python llama.cpp/convert-hf-to-gguf.py mpt-7b-storywriter --outfile mpt-7b-storywriter.gguf results in the following error:

Traceback (most recent call last):
  File "/Users/namehta4/Documents/Laptop_Neil/Research/Consulting/ML_tutorial/LLM/llama.cpp/convert-hf-to-gguf.py", line 1876, in <module>
    main()
  File "/Users/namehta4/Documents/Laptop_Neil/Research/Consulting/ML_tutorial/LLM/llama.cpp/convert-hf-to-gguf.py", line 1863, in main
    model_instance.set_vocab()
  File "/Users/namehta4/Documents/Laptop_Neil/Research/Consulting/ML_tutorial/LLM/llama.cpp/convert-hf-to-gguf.py", line 63, in set_vocab
    self._set_vocab_gpt2()
  File "/Users/namehta4/Documents/Laptop_Neil/Research/Consulting/ML_tutorial/LLM/llama.cpp/convert-hf-to-gguf.py", line 304, in _set_vocab_gpt2
    if tokenizer.added_tokens_decoder[i].special:
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'GPTNeoXTokenizerFast' object has no attribute 'added_tokens_decoder'
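This AttributeError suggests a transformers version mismatch: older releases do not expose added_tokens_decoder on fast tokenizers, so the attribute access crashes. A defensive sketch (not the upstream code) of how the access could be guarded, with a stand-in object in place of the real GPTNeoXTokenizerFast:

```python
# Defensive sketch (NOT the actual convert-hf-to-gguf.py fix): guard the
# attribute access so a transformers release without `added_tokens_decoder`
# degrades to an empty mapping instead of raising AttributeError.
# FakeTokenizer is a stand-in for the real GPTNeoXTokenizerFast object.
class FakeTokenizer:
    pass

tokenizer = FakeTokenizer()
added = getattr(tokenizer, "added_tokens_decoder", {})
special_ids = [i for i, tok in added.items() if getattr(tok, "special", False)]
print(special_ids)  # [] when the attribute is absent
```

Upgrading transformers (pip install -U transformers) is usually the simpler remedy, since newer releases do expose added_tokens_decoder on fast tokenizers.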

Thank you! Neil

namehta4 avatar Feb 20 '24 21:02 namehta4

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions[bot] avatar Apr 06 '24 01:04 github-actions[bot]

Hi there. Is there any update on this issue? I am using the JAIS model and hitting the same error.

lipingtang17 avatar May 03 '24 07:05 lipingtang17