text-generation-webui
Issue when starting LLaMA after finishing the configuration.
Describe the bug
I had several ModuleNotFoundError issues before, which I fixed by installing the missing modules (PyTorch, transformers, ...), but I don't know how to fix this one:
Is there an existing issue for this?
- [X] I have searched the existing issues
Reproduction
Normal setup of LLaMA.
Screenshot
No response
Logs
(base) C:\LLAMA\text-generation-webui>python server.py --load-in-4bit --model llama-7b-hf
Warning: --load-in-4bit is deprecated and will be removed. Use --gptq-bits 4 instead.
Loading llama-7b-hf...
CUDA extension not installed.
Traceback (most recent call last):
  File "C:\LLAMA\text-generation-webui\server.py", line 241, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "C:\LLAMA\text-generation-webui\modules\models.py", line 99, in load_model
    from modules.GPTQ_loader import load_quantized
  File "C:\LLAMA\text-generation-webui\modules\GPTQ_loader.py", line 12, in <module>
    import llama_inference_offload
  File "C:\LLAMA\text-generation-webui\repositories\GPTQ-for-LLaMa\llama_inference_offload.py", line 14, in <module>
    from transformers.models.llama.modeling_llama import LlamaModel, LlamaConfig
ModuleNotFoundError: No module named 'transformers.models.llama'
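The final ModuleNotFoundError means the installed transformers release predates LLaMA support (the `transformers.models.llama` module only shipped with transformers 4.28 and late 4.27 dev builds). A minimal diagnostic sketch to check this before reinstalling anything; `has_llama_support` is a hypothetical helper, not part of the repo:

```python
import importlib.util

def has_llama_support() -> bool:
    """Return True if the installed transformers package ships the
    llama model module (added around transformers 4.28)."""
    try:
        return importlib.util.find_spec("transformers.models.llama") is not None
    except ModuleNotFoundError:
        # transformers itself is not installed in this environment
        return False

print("llama support:", has_llama_support())
```

If this prints False in the same environment that runs server.py, the fix is to upgrade transformers there, not to touch the GPTQ code.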
System Info
Intel i5 CPU, 12 GB RAM
Same.
Try updating transformers
pip uninstall transformers
pip install git+https://github.com/huggingface/transformers
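To confirm the reinstall actually took effect (a common pitfall on Windows is a conda base environment shadowing the pip install), a quick version check can help. `transformers_version` below is a hypothetical helper, and 4.28.0 is, to my knowledge, the release where the LLaMA classes landed:

```python
import importlib.metadata
from typing import Optional

def transformers_version() -> Optional[str]:
    """Return the installed transformers version string, or None if
    transformers is not installed in the current environment."""
    try:
        return importlib.metadata.version("transformers")
    except importlib.metadata.PackageNotFoundError:
        return None

# A git install from huggingface/transformers reports a dev version,
# e.g. "4.28.0.dev0"; anything from 4.28.0 on includes the llama module.
print("transformers version:", transformers_version())
```

Run this with the same Python that launches server.py (here, the `(base)` conda Python); if it prints None or an older version, the git install went into a different environment.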
Didn't work for me