
Error when running the model

Open wzf2020 opened this issue 1 year ago • 2 comments

Scenario: running `./run.sh --model llama2-7b --arch soc` produces two errors.

Error 1: the script reports that the library `chat.cpython-310-x86_64-linux-gnu.so` does not exist.
Fix: the board is not x86, so it needs the aarch64 library; edit the library name in `run_demo.sh` to match what your board's build actually produced.
Suggestion: update `run_demo.sh`, or add a note to the FAQ telling users to adjust the library name to their actual build output.
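The fix above can be sketched as a small architecture guard in the launch script. The library filenames below are illustrative assumptions; use the names your build actually produced in `python_demo`:

```shell
# Pick the chat.*.so matching the host architecture instead of hardcoding x86_64.
# Filenames are examples only; check your actual build output.
ARCH=$(uname -m)   # "aarch64" on the SoC board, "x86_64" on a PC
if [ "$ARCH" = "aarch64" ]; then
    CHAT_LIB="chat.cpython-38-aarch64-linux-gnu.so"
else
    CHAT_LIB="chat.cpython-310-x86_64-linux-gnu.so"
fi
echo "using library: $CHAT_LIB"
```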

Error 2: the following traceback appears:

```
Traceback (most recent call last):
  File "python_demo/pipeline.py", line 216, in <module>
    main(args)
  File "python_demo/pipeline.py", line 197, in main
    model = Llama2(args)
  File "python_demo/pipeline.py", line 14, in __init__
    self.tokenizer = AutoTokenizer.from_pretrained(
  File "/home/linaro/.local/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 676, in from_pretrained
    raise ValueError(
ValueError: Tokenizer class LlamaTokenizer does not exist or is not currently imported.
```

Fix: upgrade the Transformers library with `pip3 install transformers --upgrade`.
Suggestion: mention this in the FAQ so users get a hint.
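A quick way to check whether the installed transformers is new enough before running the demo (LLaMA support, including `LlamaTokenizer`, landed around transformers 4.28; this snippet is a sketch, not part of the repo):

```shell
# Check whether the installed transformers can import LlamaTokenizer;
# if not, `pip3 install --upgrade transformers` usually resolves the ValueError.
python3 - <<'EOF'
try:
    from transformers import LlamaTokenizer  # noqa: F401
    print("LlamaTokenizer available")
except Exception:
    print("LlamaTokenizer missing: run pip3 install --upgrade transformers")
EOF
```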

wzf2020 avatar Jun 07 '24 01:06 wzf2020

Hi, I changed it to aarch64, but I still can't build the chat.gnu.so library. What could be going wrong?

ctrlcplusv avatar Jun 20 '24 06:06 ctrlcplusv

That's a strange problem; it looks like a linking error. Was lib_soc downloaded correctly?

Follow the libsophon download instructions here: https://github.com/sophgo/LLM-TPU/tree/main/models/Qwen/demo_parallel

Then try `export LD_LIBRARY_PATH=xxx`.
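Concretely, the suggestion above might look like this; the install path and the library filename are assumptions, so point them at wherever libsophon and the chat library actually live on your board:

```shell
# Make the dynamic linker find the libsophon runtime before launching the demo.
# /opt/sophon/libsophon-current/lib is a common install location, not guaranteed.
export LD_LIBRARY_PATH=/opt/sophon/libsophon-current/lib:$LD_LIBRARY_PATH

# List any shared-library dependencies the chat library still cannot resolve:
ldd python_demo/chat.cpython-38-aarch64-linux-gnu.so 2>/dev/null | grep "not found" \
    || echo "no unresolved dependencies reported"
```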

chuxiaoyi2023 avatar Jul 03 '24 09:07 chuxiaoyi2023