Linly
Tokenizer error after running the script generate_chatllama.py
Traceback (most recent call last):
File "scripts/generate_chatllama.py", line 82, in
I get this error after running the script. Has anyone else run into it?
Also looking for an answer here.
Same error here.
Subscribing to this issue since I'm hitting the same problem.
Same problem. How do we fix it?
Could the tokenizer model at spm_model_file = '../ChatLLaMA-zh-7B/tokenizer.model' be corrupted?
Same error here as well.
I tested it and didn't hit this problem. Have you checked your SentencePiece version? Mine is 0.1.97.
My SentencePiece version is also 0.1.97. I just tried again and still get the error:
File "/opt/conda/lib/python3.10/site-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
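This ParseFromArray failure usually means the file on disk is not a valid SentencePiece protobuf at all. A common cause is that git clone without git-lfs leaves a tiny text pointer stub in place of the real binary file. A minimal diagnostic sketch (the path comes from the thread; the helper names are illustrative, not part of the repo):

```python
import os

# Path quoted in the thread; adjust to your local checkout.
SPM_MODEL_FILE = '../ChatLLaMA-zh-7B/tokenizer.model'


def is_git_lfs_pointer(path):
    """Return True if `path` is a git-lfs pointer stub rather than real content.

    A pointer file is a tiny text file whose first line is the LFS spec URL;
    a real SentencePiece model is a binary protobuf hundreds of KB in size.
    """
    with open(path, 'rb') as f:
        head = f.read(64)
    return head.startswith(b'version https://git-lfs.github.com/spec/v1')


def check_model(path):
    """Report whether the tokenizer file looks usable or is an LFS stub."""
    size = os.path.getsize(path)
    if is_git_lfs_pointer(path):
        return ('%s is a git-lfs pointer (%d bytes): '
                'install git lfs and re-download the real file' % (path, size))
    return '%s looks like real binary content (%d bytes)' % (path, size)


if __name__ == '__main__':
    print(check_model(SPM_MODEL_FILE))
```

If the check reports a pointer stub, re-fetching the weights with git lfs installed should replace it with the actual model file.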
Solved: re-download the model weight files. Make sure git lfs is installed before running git clone.
After installing git lfs, downloading the model weight files is very slow. Is there a faster way?