
Issue when starting llama after ending the configuration.

Open · AdAmVitam opened this issue on Mar 22, 2023

Describe the bug

I had several ModuleNotFoundError errors before, but I fixed those by installing the missing modules (pytorch, transformers, ...). Now I don't know how to fix this one:

Running python server.py --load-in-4bit --model llama-7b-hf fails with ModuleNotFoundError: No module named 'transformers.models.llama' (full traceback in the Logs section below).
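A quick way to confirm the cause (a diagnostic sketch, not from the original report): the transformers.models.llama package only exists in sufficiently new transformers builds, so printing the installed version pinpoints the problem. LLaMA support had only just been merged into transformers when this issue was filed, and it first shipped in a stable release around 4.28.

python -c "import transformers; print(transformers.__version__)"

Any version without a models.llama subpackage will fail with exactly this error.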

Is there an existing issue for this?

  • [X] I have searched the existing issues

Reproduction

Standard LLaMA setup, then launching the server with python server.py --load-in-4bit --model llama-7b-hf (see Logs below).

Screenshot

No response

Logs

(base) C:\LLAMA\text-generation-webui>python server.py --load-in-4bit --model llama-7b-hf
Warning: --load-in-4bit is deprecated and will be removed. Use --gptq-bits 4 instead.

Loading llama-7b-hf...
CUDA extension not installed.
Traceback (most recent call last):
  File "C:\LLAMA\text-generation-webui\server.py", line 241, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "C:\LLAMA\text-generation-webui\modules\models.py", line 99, in load_model
    from modules.GPTQ_loader import load_quantized
  File "C:\LLAMA\text-generation-webui\modules\GPTQ_loader.py", line 12, in <module>
    import llama_inference_offload
  File "C:\LLAMA\text-generation-webui\repositories\GPTQ-for-LLaMa\llama_inference_offload.py", line 14, in <module>
    from transformers.models.llama.modeling_llama import LlamaModel,LlamaConfig
ModuleNotFoundError: No module named 'transformers.models.llama'
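
Note also the "CUDA extension not installed." line above. That warning is separate from the crash: it usually means the GPTQ-for-LLaMa quantization kernel was never compiled. A sketch of the usual build step (path taken from the traceback; the exact command depends on the GPTQ-for-LLaMa branch in use):

cd C:\LLAMA\text-generation-webui\repositories\GPTQ-for-LLaMa
python setup_cuda.py install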

System Info

Intel i5 CPU, 12 GB RAM

AdAmVitam · Mar 22 '23 20:03

Same.

IronWolve · Mar 23 '23 06:03

Try updating transformers:

pip uninstall transformers
pip install git+https://github.com/huggingface/transformers

oobabooga · Mar 29 '23 02:03
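
After the reinstall, a quick sanity check (an editorial sketch, not part of the thread) is to retry the exact import that failed in the traceback; it prints OK once the module is present:

python -c "from transformers.models.llama.modeling_llama import LlamaModel, LlamaConfig; print('OK')"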

Didn't work for me

SatoshiReport · Apr 21 '23 16:04