
AttributeError: 'NoneType' object has no attribute 'device'


Why is this happening?

batch = tokenizer("Two things are infinite: ", return_tensors="pt")

with torch.cuda.amp.autocast():
    output_tokens = model.generate(**batch, max_new_tokens=50)

print("\n\n", tokenizer.decode(output_tokens[0], skip_special_tokens=True))

It gives the following error:

AttributeError: 'NoneType' object has no attribute 'device'

imrankh46 avatar Mar 18 '23 11:03 imrankh46

Hi @imrankh46, thanks for the issue! We are aware of it; for now, the workaround is to pass device_map={"":0} when calling PeftModel.from_pretrained. We will work on a proper fix soon. The issue is caused by a call to dispatch_model inside PeftModel.from_pretrained, which breaks a few things with Linear8bitLt layers.
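
For reference, a minimal sketch of that workaround (the base model and adapter names below are placeholders, not from this thread):

from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the 8-bit base model, then force the adapter onto GPU 0 via device_map={"":0}
base_model = AutoModelForCausalLM.from_pretrained("base-model-name", load_in_8bit=True, device_map={"": 0})
model = PeftModel.from_pretrained(base_model, "adapter-name", device_map={"": 0})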

younesbelkada avatar Mar 20 '23 08:03 younesbelkada

Thanks for the response.

imrankh46 avatar Mar 20 '23 08:03 imrankh46

@imrankh46 I believe https://github.com/huggingface/accelerate/pull/1237 should have fixed your issue. Can you try installing accelerate from source and let us know if you still face it?

pip install git+https://github.com/huggingface/accelerate

younesbelkada avatar Apr 01 '23 11:04 younesbelkada

Hi @younesbelkada, I tried installing accelerate from source, but I got another error:

NotImplementedError: Cannot copy out of meta tensor; no data!

Do you know the possible reason for this? Thanks!

HZQ950419 avatar Apr 15 '23 10:04 HZQ950419

I have encountered the same problem; my version is peft==0.2.0. I wonder if this issue has been resolved?

YSLLYW avatar May 03 '23 12:05 YSLLYW

@YSLLYW Can you try installing accelerate from source?

pip install git+https://github.com/huggingface/accelerate

younesbelkada avatar May 03 '23 12:05 younesbelkada

You need to pass the device_map parameter like this: device_map={"":0}

The complete code is below; you just need to pass your model name.


import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, LlamaTokenizer

# Pass your adapter (PEFT) model name here
peft_model_id = 'your_model_name'
config = PeftConfig.from_pretrained(peft_model_id)

# Load the 8-bit base model and its tokenizer
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto'
)
tokenizer = LlamaTokenizer.from_pretrained(config.base_model_name_or_path)

# Load the LoRA adapter, forcing it onto GPU 0
model = PeftModel.from_pretrained(model, peft_model_id, torch_dtype=torch.float16, device_map={"": 0})
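
As a hedged follow-up sketch (the prompt is just an example), generation with the model loaded above; note that the inputs are moved to the model's device before calling generate:

batch = tokenizer("Two things are infinite: ", return_tensors="pt").to(model.device)

with torch.no_grad():
    output_tokens = model.generate(**batch, max_new_tokens=50)

print(tokenizer.decode(output_tokens[0], skip_special_tokens=True))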

imrankh46 avatar May 03 '23 12:05 imrankh46

I already solved the issue. Thanks!

imrankh46 avatar May 03 '23 12:05 imrankh46

Yes, I just updated PEFT to version 0.3.0 and that resolved the issue. Thank you for your reply.

YSLLYW avatar May 03 '23 13:05 YSLLYW

Please also add Contrastive Search support for text generation. When I pass penalty_alpha=0.6, I get a CUDA out-of-memory error.
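
For reference, contrastive search in transformers is triggered by passing penalty_alpha together with top_k to generate; a minimal sketch with the model from the thread above (parameter values are just examples):

inputs = tokenizer("Two things are infinite: ", return_tensors="pt").to(model.device)

# Contrastive search scores top_k candidate tokens against previous hidden states,
# so it uses more memory than greedy decoding; lowering top_k or max_new_tokens helps.
output_tokens = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=50)
print(tokenizer.decode(output_tokens[0], skip_special_tokens=True))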

imrankh46 avatar May 03 '23 13:05 imrankh46

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

github-actions[bot] avatar May 27 '23 15:05 github-actions[bot]