
Error when running with text-generation-webui: AssertionError("Torch not compiled with CUDA enabled")

Open gensou747 opened this issue 1 year ago • 7 comments

I am using decapoda-research/llama-7b-hf. After running `python server.py --model llama-7b-hf --lora chinese-alpaca-lora-7b`, I get:

raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

CUDA and PyTorch are already installed. How can I fix this?

gensou747 avatar Apr 14 '23 20:04 gensou747

This is most likely caused by a mismatch between your CUDA and torch versions; you can confirm with torch.cuda.is_available(). Alternatively, you can run on CPU as follows:

python server.py --model llama-7b-hf --lora chinese-alpaca-lora-7b --cpu
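Expanding on the check suggested above, here is a minimal diagnostic sketch (assuming a standard PyTorch install) that shows whether the installed torch build was compiled with CUDA support. A "+cpu" build, or torch.version.cuda being None, explains the AssertionError; in that case either keep the --cpu flag or reinstall a CUDA-enabled torch build matching the local CUDA toolkit:

```python
# Diagnostic sketch: verify whether this torch build supports CUDA at all.
import torch

print(torch.__version__)          # e.g. "2.0.1+cpu" vs "2.0.1+cu117"
print(torch.version.cuda)         # None on a CPU-only build
print(torch.cuda.is_available())  # must be True before dropping --cpu

# Pick a device defensively instead of assuming CUDA exists.
device = "cuda" if torch.cuda.is_available() else "cpu"
print("selected device:", device)
```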

iMountTai avatar Apr 15 '23 06:04 iMountTai

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] avatar Apr 23 '23 00:04 github-actions[bot]

I ran into the same problem. First I manually converted the original LLaMA-7B model to HF format. Then I followed the documentation section "Building the UI with text-generation-webui" and finally ran: python server.py --model llama-7b-hf --lora chinese-alpaca-lora-7b --cpu

The project starts up fine, but when I start a conversation there is no response at all, and the terminal reports:
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
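For context, the assertion in the trace above is exactly what a CPU-only torch build raises when any code path tries to move a tensor to CUDA, which is why the error can still appear at generation time even though the server was started with --cpu. A minimal sketch of the failure mode and the guarded alternative:

```python
import torch

x = torch.ones(2)
try:
    # On a CPU-only build, an unguarded x.cuda() raises:
    #   AssertionError: Torch not compiled with CUDA enabled
    # Guarding on torch.cuda.is_available() avoids the call entirely.
    x = x.cuda() if torch.cuda.is_available() else x.to("cpu")
    print("tensor lives on:", x.device)
except AssertionError as err:
    print("torch raised:", err)
```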

henryGao9 avatar May 12 '23 09:05 henryGao9

I pulled the latest webui code and ran the relevant command, but could not reproduce your problem. Please check your environment, or open an issue in the webui repository. My command and output are below:

(torch1.13-cpu) [/text-generation-webui]$ python server.py --model llama-7b-hf --lora chinese-alpaca-lora-7b --cpu
INFO:Gradio HTTP request redirected to localhost :)
INFO:Loading llama-7b-hf...
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████| 2/2 [00:13<00:00,  6.74s/it]
INFO:Loaded the model in 13.70 seconds.

INFO:Applying the following LoRAs to llama-7b-hf: chinese-alpaca-lora-7b
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Output generated in 47.67 seconds (0.99 tokens/s, 47 tokens, context 51, seed 57377600)

[screenshot: generated response in the webui]

iMountTai avatar May 14 '23 03:05 iMountTai

This is an automatic holiday reply from QQ Mail. @.***

henryGao9 avatar May 14 '23 03:05 henryGao9

Thanks for the reply. The problem has been solved, and I am trying it out now.

henryGao9 avatar May 15 '23 02:05 henryGao9

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] avatar May 22 '23 22:05 github-actions[bot]

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.

github-actions[bot] avatar May 25 '23 22:05 github-actions[bot]