Error when loading custom finetuned lora: AttributeError: 'NoneType' object has no attribute 'device'

Open · 0xbitches opened this issue Mar 17 '23

Describe the bug

This is the same bug as https://github.com/tloen/alpaca-lora/issues/14.

Here's the fix I used to work around this issue: https://github.com/tloen/alpaca-lora/issues/14#issuecomment-1471263165

See suggested fix below.

Is there an existing issue for this?

  • [X] I have searched the existing issues

Reproduction

Use a custom-trained LoRA (such as llama-13b fine-tuned on the alpaca dataset), load it with the webui, and the error appears.

Screenshot

No response

Logs

File "C:\Users\jason\miniconda3\envs\textgen\lib\site-packages\bitsandbytes\functional.py", line 1698, in transform
    prev_device = pre_call(A.device)
AttributeError: 'NoneType' object has no attribute 'device'

System Info

Windows 11 Pro. RTX 4090.

0xbitches · Mar 17 '23 19:03

Suggested fix for people encountering this bug:

Add device_map={'': 0} to line 18 of modules/LoRA.py, inside PeftModel.from_pretrained.

This forces PEFT to load the model on the GPU and should work for most people with a single GPU. Users with multiple GPUs can edit device_map to their liking.
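
For reference, here is a rough sketch of what the edited call in modules/LoRA.py could look like. This is only an illustration: the surrounding code varies between versions of the webui, and the path "loras/my-custom-lora" is a placeholder rather than the actual value used in the file.

    from peft import PeftModel

    # Sketch only; modules/LoRA.py differs between versions of the webui.
    # In an accelerate-style device_map, the empty-string key means "the
    # whole model" and the value 0 means cuda:0, so PEFT places all the
    # adapter weights on the first GPU instead of leaving them without a device.
    model = PeftModel.from_pretrained(
        model,
        "loras/my-custom-lora",   # placeholder path to the fine-tuned LoRA
        device_map={'': 0}        # single GPU: put everything on cuda:0
    )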

0xbitches · Mar 17 '23 19:03

So to clarify, the changes I had to apply were in generate.py:

    model = PeftModel.from_pretrained(
        model, "tloen/alpaca-lora-7b",
        torch_dtype=torch.float16
    )

change this to:

    model = PeftModel.from_pretrained(
        model, "tloen/alpaca-lora-7b",
        torch_dtype=torch.float16,
        device_map={'': 0}
    )
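
On a multi-GPU machine, the same call can take a different device_map. A rough, untested sketch (assuming the installed PEFT and accelerate versions accept accelerate-style device maps, which the workaround above already relies on):

    model = PeftModel.from_pretrained(
        model, "tloen/alpaca-lora-7b",
        torch_dtype=torch.float16,
        device_map="auto"  # let accelerate spread the layers across the available GPUs
    )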

ThatCoffeeGuy · Mar 18 '23 12:03

Weird, I don't have these lines in my generate.py in the venv.

oliverban · Mar 18 '23 23:03

I think that this has been fixed with all the recent updates to LoRA.py

oobabooga · Mar 29 '23 03:03

@oobabooga sorry for the bad question, but how do we get these updates? Not sure which library to pip install.

pGit1 · Mar 31 '23 13:03

Just git pull inside the text-generation-webui folder

oobabooga · Mar 31 '23 13:03

Sorry, I don't actually have this repo installed. From my research, it looks like the latest iteration of PEFT needs to be pulled down. Thanks for your help! I'll get this repo once I'm familiar enough with the ins and outs of this stuff.

pGit1 · Mar 31 '23 13:03