
Cannot merge LORA layers when the model is loaded in 8-bit mode

Open yangjianxin1 opened this issue 2 years ago • 27 comments

When I load the model as follows, it throws the error "Cannot merge LORA layers when the model is loaded in 8-bit mode". How can I load the model in 4-bit for inference?

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

model_path = 'decapoda-research/llama-30b-hf'
adapter_path = 'timdettmers/guanaco-33b'
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type='nf4'
)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    low_cpu_mem_usage=True,
    load_in_4bit=True,
    quantization_config=quantization_config,
    torch_dtype=torch.float16,
    device_map='auto'
)
model = PeftModel.from_pretrained(model, adapter_path)
model = model.merge_and_unload()

yangjianxin1 avatar May 25 '23 16:05 yangjianxin1

Just fyi decapoda-research is extremely out of date. Please use huggyllama instead.

USBhost avatar May 25 '23 17:05 USBhost

Did you solve this? It's the same result with huggyllama.

bodaay avatar May 25 '23 19:05 bodaay

@bodaay what is the size of the adapter.bin you are getting? Mine is only a few bytes.

Btw, I just commented out the model = model.merge_and_unload() line and it works. Merging is not necessary for inference.

KKcorps avatar May 25 '23 20:05 KKcorps

remove "model = model.merge_and_unload()", and it works.

yangjianxin1 avatar May 26 '23 02:05 yangjianxin1

@bodaay what is the size of the adapter.bin you are getting? Mine is only a few bytes.

Btw, I just commented out the model = model.merge_and_unload() line and it works. Merging is not necessary for inference.

When I re-save the model, it's the proper 3.2 GB.

bodaay avatar May 26 '23 07:05 bodaay

@yangjianxin1, Based on the code snippet you provided, it seems that you are loading the model using AutoModelForCausalLM with 4-bit quantization enabled. However, when attempting to merge the LORA layers using merge_and_unload(), the mentioned error occurs.

To address this issue, we recommend the following approach:

  1. Remove the line model = model.merge_and_unload() from your code. The merging step is not necessary for inference and can be omitted.

By removing the merge_and_unload() line, you should be able to successfully load and use the model for inference without encountering the "Cannot merge LORA layers when the model is loaded in 8-bit mode" error.
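
For reference, here is a minimal sketch of 4-bit inference with the adapter applied but not merged, reusing the paths from the original snippet (the prompt and generation settings are only illustrative):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

model_path = 'decapoda-research/llama-30b-hf'
adapter_path = 'timdettmers/guanaco-33b'

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type='nf4'
)

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    quantization_config=quantization_config,
    device_map='auto'
)
# Attach the LoRA adapter; it is applied on the fly at inference time, no merge needed.
model = PeftModel.from_pretrained(model, adapter_path)

inputs = tokenizer("What is QLoRA?", return_tensors='pt').to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))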

hemangjoshi37a avatar May 28 '23 08:05 hemangjoshi37a

@yangjianxin1, Based on the code snippet you provided, it seems that you are loading the model using AutoModelForCausalLM with 4-bit quantization enabled. However, when attempting to merge the LORA layers using merge_and_unload(), the mentioned error occurs.

To address this issue, we recommend the following approach:

  1. Remove the line model = model.merge_and_unload() from your code. The merging step is not necessary for inference and can be omitted.

By removing the merge_and_unload() line, you should be able to successfully load and use the model for inference without encountering the "Cannot merge LORA layers when the model is loaded in 8-bit mode" error.

How can I obtain a single model that I can use in llama.cpp if I can't merge them?

Do you have any idea how to make the fine-tuned model usable in llama.cpp?

Any help is highly appreciated

larawehbe avatar May 31 '23 13:05 larawehbe

Removing merge_and_unload() is not the solution! Of course inference works fine without it: there is simply no attempt to merge your LoRA weights into the base model. What is the real solution here?

KiranNadig62 avatar Jun 19 '23 16:06 KiranNadig62

You can see a workaround here: https://github.com/substratusai/model-falcon-7b-instruct/blob/430cf5dfda02c0359122d4ef7f9b6d0c01bb3b39/src/train.ipynb

Effectively, I reload the base model in 16-bit to work around the issue. It works fine for my use case.
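
For reference, a minimal sketch of that workaround (the model name and paths below are placeholders, not the exact notebook contents):

import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model_path = "tiiuae/falcon-7b-instruct"  # placeholder base model
adapter_path = "path/to/lora-adapter"          # placeholder adapter directory
merged_model_path = "path/to/merged-model"     # placeholder output directory

# Reload the base model in plain float16 (no 4-bit/8-bit quantization)...
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# ...then attach the adapter and merge, which is allowed for a non-quantized model.
model = PeftModel.from_pretrained(base_model, adapter_path, torch_dtype=torch.float16)
model = model.merge_and_unload()
model.save_pretrained(merged_model_path)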

samos123 avatar Jul 06 '23 18:07 samos123

Has anyone found a way to solve this without loading the model in 16-bit? My GPU cannot hold the whole Falcon-40B in 16-bit, and using device_map=auto with offload_folder causes the Python process to be killed. Loading the base model in 4-bit mode and merging with the LoRA adapters still fails with "Cannot merge LORA layers in 8-bit mode".

ashmitbhattarai avatar Jul 11 '23 07:07 ashmitbhattarai

I hope that in the future this code will work... it's more natural.

model = peft_model.merge_and_unload()
model.save_pretrained("/model/trained")

jocastrocUnal avatar Jul 22 '23 03:07 jocastrocUnal

Any updates on this?

MrigankRaman avatar Aug 02 '23 13:08 MrigankRaman

You can see a workaround here: https://github.com/substratusai/model-falcon-7b-instruct/blob/430cf5dfda02c0359122d4ef7f9b6d0c01bb3b39/src/train.ipynb

Effectively, I reload the base model in 16-bit to work around the issue. It works fine for my use case.

The link is broken

larawehbe avatar Aug 03 '23 09:08 larawehbe

You can see a workaround here: https://github.com/substratusai/model-falcon-7b-instruct/blob/430cf5dfda02c0359122d4ef7f9b6d0c01bb3b39/src/train.ipynb

Effectively, I reload the base model in 16-bit to work around the issue. It works fine for my use case.

Same here, the link is broken. Can you please re-share the link?

xpang-sf avatar Aug 03 '23 22:08 xpang-sf

New link: https://github.com/substratusai/images/blob/main/model-trainer-huggingface/src/train.ipynb

samos123 avatar Aug 04 '23 00:08 samos123

New link: https://github.com/substratusai/images/blob/main/model-trainer-huggingface/src/train.ipynb

Thank you so much!

xpang-sf avatar Aug 04 '23 01:08 xpang-sf

New link: https://github.com/substratusai/images/blob/main/model-trainer-huggingface/src/train.ipynb

May I ask another basic question: this is the training and model-saving code. After the model is saved, did you test loading the saved model, running inference, and checking whether the generated results are good? If so, do you have a separate inference/generation script? Thanks a lot in advance.

xpang-sf avatar Aug 04 '23 05:08 xpang-sf

Yes, in substratus.ai we separate model loading, fine-tuning, and serving into separate images. I did check whether the fine-tuned model produced different results, and it did.

In the notebook that I linked, the following paths are used

# original base model e.g. falcon-7b
model_path = "/content/saved-model/"
# path of final finetuned model (merged model)
trained_model_path = "/content/model"
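
For a quick sanity check after merging, something along these lines should work (a sketch, assuming the tokenizer was saved alongside the merged weights; the prompt is arbitrary):

from transformers import AutoModelForCausalLM, AutoTokenizer

trained_model_path = "/content/model"  # merged model from the notebook

tokenizer = AutoTokenizer.from_pretrained(trained_model_path)
model = AutoModelForCausalLM.from_pretrained(
    trained_model_path,
    device_map="auto",
    trust_remote_code=True,
)

inputs = tokenizer("What is QLoRA?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))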

samos123 avatar Aug 04 '23 06:08 samos123

Yes, in substratus.ai we separate model loading, fine-tuning, and serving into separate images. I did check whether the fine-tuned model produced different results, and it did.

In the notebook that I linked, the following paths are used

# original base model e.g. falcon-7b
model_path = "/content/saved-model/"
# path of final finetuned model (merged model)
trained_model_path = "/content/model"

Thank you so much, I will try on my side and let you know.

xpang-sf avatar Aug 04 '23 17:08 xpang-sf

I am worried that this quick fix might be harmful to the model's capabilities. Is there any other way to fix this problem?

Rem1L avatar Aug 08 '23 01:08 Rem1L

Has anyone found a way to solve this without loading the model in 16-bit? My GPU cannot hold the whole Falcon-40B in 16-bit, and using device_map=auto with offload_folder causes the Python process to be killed. Loading the base model in 4-bit mode and merging with the LoRA adapters still fails with "Cannot merge LORA layers in 8-bit mode".

I found a way that works in my case, and I hope it works for you too. The problems of working with Llama 2 (training, merging, and inference) when we have just one GPU with limited memory can be split into three different parts, and I will go through each of them.

  • Training: It is not possible to perform pure 8-bit or 4-bit training, so by leveraging parameter-efficient fine-tuning (PEFT) methods and training, for example, adapters on top of the quantized model, we can fine-tune the Llama 2 model. prepare_model_for_kbit_training plays an important role in preparing the model for training (a minimal sketch of this step is at the end of this comment).
  • Merging: on the other hand, for merging we can merge our LoRA layers into Llama 2 even on the CPU. Here is a sample template:
import torch
from transformers import AutoModel
from peft import PeftModel

llama_model = AutoModel.from_pretrained(model_name,
                        device_map={"": "cpu"},
                        torch_dtype=torch.float16)

model = PeftModel.from_pretrained(llama_model,
                        peft_model_path,
                        torch_dtype=torch.float16,
                        device_map={"": "cpu"})

model = model.merge_and_unload()

model.save_pretrained("merged-model")
  • Inference Time: for loading, you now just need to read the model from the merged-model path; you can load it in 8-bit or however you need. Here is a sample template:
model_for_infer = AutoModel.from_pretrained("merged-model",
                        device_map="auto",
                        load_in_8bit=True)
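
As mentioned in the training step above, here is a minimal training-setup sketch using prepare_model_for_kbit_training (the model name and LoRA hyperparameters are placeholders, not a recommended configuration):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder base model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

# Prepare the quantized model for k-bit training (gradient checkpointing,
# casting norm layers, enabling input gradients, etc.).
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
# ...then train the adapter as usual (e.g. with transformers.Trainer).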

hamidahmadian avatar Sep 27 '23 09:09 hamidahmadian

While not the most elegant solution, @hamidahmadian's approach works for me.

samuelhkahn avatar Sep 28 '23 19:09 samuelhkahn

I have an issue with this approach when I add a special token. Has anyone figured out a way to do that? Code I'm using:

import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True)
base_model.resize_token_embeddings(len(tokenizer))
model = PeftModel.from_pretrained(base_model, trained_model_path_lora, torch_dtype=torch.float16)
model = model.merge_and_unload()

error observed:

>> model = PeftModel.from_pretrained(base_model, trained_model_path_lora, torch_dtype=torch.float16)
File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1158, in Module.to.<locals>.convert(t)
   1155 if convert_to_format is not None and t.dim() in (4, 5):
   1156     return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None,
   1157                 non_blocking, memory_format=convert_to_format)
-> 1158 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)

NotImplementedError: Cannot copy out of meta tensor; no data!

samos123 avatar Oct 19 '23 18:10 samos123

On my laptop with 16GB RAM + 16GB VRAM, @hamidahmadian's solution allows me to load the models, but still gives me an OOM error when doing merge_and_unload(). However, the following works for me:

llama_model = AutoModelForCausalLM.from_pretrained(model_name, device_map={"": "cpu"}, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(llama_model, peft_model_path)
model = model.merge_and_unload()

eternitybt avatar Nov 15 '23 16:11 eternitybt

This should be the correct way to fix the issue:

import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# When you execute the commonly used `model = model.merge_and_unload()`, the error
# `Cannot merge LORA layers when the model is loaded in 8-bit mode` occurs because
# the base model was loaded in 4-bit. Therefore, the base model must be reloaded in 16-bit.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    torch_dtype=torch.float16,
    load_in_8bit=False,
    device_map="auto",
    trust_remote_code=True,
)

peft_model = PeftModel.from_pretrained(model, NEW_MODEL_PATH)
merged_model = peft_model.merge_and_unload()


praharshbhatt avatar Jun 21 '24 04:06 praharshbhatt