
llama_inference RuntimeError: Internal: src/sentencepiece_processor.cc

Open youkpan opened this issue 2 years ago • 0 comments

```
python llama_inference.py ./llama-7b-hf --wbits 4 --load ./llama-7b-4bit.pt --text "this is llama"
```

Loading model ... Done.

```
Traceback (most recent call last):
  File "/root/GPTQ-for-LLaMa/llama_inference.py", line 114, in <module>
    tokenizer = AutoTokenizer.from_pretrained(args.model)
  File "/opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 679, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1804, in from_pretrained
    return cls._from_pretrained(
  File "/opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1958, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/opt/conda/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama.py", line 72, in __init__
    self.sp_model.Load(vocab_file)
  File "/opt/conda/lib/python3.10/site-packages/sentencepiece/__init__.py", line 905, in Load
    return self.LoadFromFile(model_file)
  File "/opt/conda/lib/python3.10/site-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
```
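The traceback shows SentencePiece failing to parse `tokenizer.model` from `./llama-7b-hf`, which means the file on disk is not a valid SentencePiece model. A frequent cause of exactly this error is cloning a Hugging Face repo without Git LFS, which leaves `tokenizer.model` as a small text pointer stub instead of the real binary. A minimal diagnostic sketch (the helper name and the LFS-stub diagnosis are assumptions, not confirmed in this thread):

```python
from pathlib import Path

# Git LFS pointer stubs are small text files that always begin with this line.
LFS_MAGIC = b"version https://git-lfs.github.com/spec/v1"

def looks_like_lfs_pointer(path: str) -> bool:
    """Heuristically detect whether `path` is a Git LFS pointer stub
    rather than the actual binary file (e.g. a real tokenizer.model)."""
    head = Path(path).read_bytes()[:64]
    return head.startswith(LFS_MAGIC)
```

If this returns True for `./llama-7b-hf/tokenizer.model`, re-fetching the file (e.g. `git lfs pull` in the model directory, or re-downloading it from the model hub) should resolve the parse error.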

youkpan · Mar 15 '23 05:03