
lora weights are not saved correctly

taishan1994 opened this issue 2 years ago • 10 comments

The saved adapter_model.bin is only 441 KB.

https://github.com/artidoro/qlora/issues/38
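
A quick way to see what actually got written is to load the file and list its tensors. A minimal sketch (the checkpoint path is a placeholder):

import torch

# Inspect what the saved adapter actually contains.
state = torch.load("output/checkpoint-1000/adapter_model.bin", map_location="cpu")
for name, tensor in state.items():
    print(name, tuple(tensor.shape), tensor.dtype)
# A correctly saved adapter lists lora_A / lora_B weight pairs for each
# wrapped layer; the ~441 KB file from this bug is missing usable weights.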

taishan1994 · May 26 '23 13:05

check your peft version and try:

pip uninstall peft -y
pip install git+https://github.com/huggingface/peft.git@e536616888d51b453ed354a6f1e243fecb02ea08
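
After reinstalling, it's worth confirming which version actually ends up on your path, e.g.:

import peft

# The qlora changes need 0.4.0.dev0.
print(peft.__version__)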

s1ghhh · May 26 '23 13:05

> pip install git+https://github.com/huggingface/peft.git@e536616888d51b453ed354a6f1e243fecb02ea08

That commit's peft version is 0.3.0.dev0, so it doesn't work. The peft version must be peft==0.4.0.dev0, but then the above-mentioned problems arise.

taishan1994 · May 26 '23 13:05

Yes, @taishan1994 is correct; we can't use an older version of peft since it doesn't have the qlora changes.

KKcorps · May 26 '23 15:05

@taishan1994 can you try the changes in the PR and see if they work for you?

KKcorps · May 26 '23 16:05

@KKcorps that works for me.

anyili · May 26 '23 21:05

> @taishan1994 can you try the changes in the PR and see if they work for you?

The model is saved as 112 MB, but loading the model for inference reports the following error: ValueError: Cannot merge LORA layers when the model is loaded in 8-bit mode, like https://github.com/artidoro/qlora/issues/16

taishan1994 · May 27 '23 04:05

For the merge error, just comment out the merge_and_unload line.

It is not mandatory to merge for inference; the combined peft model object can be used as is.
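
For example, something along these lines (a minimal sketch; the model name and adapter path are placeholders, not from this repo):

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the 4-bit base model and attach the trained adapter on top.
base = AutoModelForCausalLM.from_pretrained(
    "base-model-name",                # placeholder
    load_in_4bit=True,
    device_map={"": 0},
)
model = PeftModel.from_pretrained(base, "path/to/adapter")  # placeholder
model.eval()  # no merge_and_unload() needed

tokenizer = AutoTokenizer.from_pretrained("base-model-name")
inputs = tokenizer("Hello, world", return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))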

KKcorps · May 27 '23 04:05

> For the merge error, just comment out the merge_and_unload line.
>
> It is not mandatory to merge for inference; the combined peft model object can be used as is.

That can be solved, but a new problem comes up: https://github.com/artidoro/qlora/issues/12

The fix there is simply to not add model = model.cuda().eval() after loading (leave that line out).

taishan1994 · May 27 '23 05:05

To summarize all the issues:

  • lora weights are not saved correctly: comment out the following code (an alternative saving sketch using peft's own API follows after this list)
# if args.bits < 16:
#     old_state_dict = model.state_dict
#     model.state_dict = (
#         lambda self, *_, **__: get_peft_model_state_dict(self, old_state_dict())
#     ).__get__(model, type(model))
  • RuntimeError: self and mat2 must have the same dtype: the peft version must be 0.4.0.dev0:
pip install -U git+https://github.com/huggingface/peft.git
  • ValueError: Cannot merge LORA layers when the model is loaded in 8-bit mode: just don't use model = model.merge_and_unload()
  • RuntimeError: mat1 and mat2 shapes cannot be multiplied (44x6656 and 1x22151168): this happens when the loaded 4-bit model is moved again with .cuda(), which leaves a weight in its packed 4-bit form (22151168 = 6656 x 6656 / 2, i.e. two 4-bit values per byte). Load it like this instead:

from transformers import AutoModel
from peft import PeftModel

model = AutoModel.from_pretrained(args["model_dir"],
                                  trust_remote_code=True,
                                  load_in_4bit=True,
                                  device_map={"":0})
model = PeftModel.from_pretrained(model, args["save_dir"], trust_remote_code=True)
# model.cuda().eval()  <- DO NOT ADD THIS
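
As an alternative to the monkey-patched state_dict in the first bullet, the adapter can also be saved through peft's own API. A minimal sketch (it assumes model is the trained PeftModel and the output path is a placeholder):

from peft import get_peft_model_state_dict

# `model` is assumed to be the trained PeftModel from the run above.
model.save_pretrained("lora-output")  # writes adapter_model.bin + adapter_config.json
print(len(get_peft_model_state_dict(model)))  # number of LoRA tensors being saved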

taishan1994 · May 27 '23 06:05

@taishan1994,

Based on the information you provided, the saved adapter_model.bin file is only 441 KB, indicating that the weights were not saved properly. To address this issue, we recommend the following solution:

  1. Ensure that you have the latest version of the peft library installed. You can do this by running the following command:
pip install -U git+https://github.com/huggingface/peft.git

Updating to version 0.4.0.dev0 of peft may help resolve the problem.

  2. Comment out the following code in your script:
# if args.bits < 16:
#     old_state_dict = model.state_dict
#     model.state_dict = (
#         lambda self, *_, **__: get_peft_model_state_dict(self, old_state_dict())
#     ).__get__(model, type(model))

By commenting out this code block, you can prevent potential conflicts and ensure that the LORA weights are saved correctly.
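
As a sanity check after applying the fix, the saved adapter should be on the order of the 112 MB reported above, not ~441 KB. A minimal sketch (the checkpoint path is a placeholder):

import os

path = "output/checkpoint-1000/adapter_model.bin"  # placeholder path
print(f"adapter_model.bin: {os.path.getsize(path) / 1e6:.1f} MB")
# ~0.4 MB means the LoRA weights were still not written;
# after the fix it should be roughly the adapter's full size.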

hemangjoshi37a · May 28 '23 08:05