model merge_and_unload does not support layer_replication
System Info
When you train a model with layer_replication set in LoraConfig, you will find that the adapter cannot be merged into the base model correctly.
Who can help?
No response
Information
- [x] The official example scripts
- [x] My own modified scripts
Tasks
- [x] An officially supported task in the examples folder
- [x] My own task or dataset (give details below)
Reproduction
Just set layer_replication in LoraConfig, train a simple LoRA, and merge it into the base model, as in the sketch below.
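A minimal reproduction sketch along those lines; the model name, target modules, and layer ranges are placeholders chosen for illustration:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hypothetical 22-block base model (placeholder name).
base = AutoModelForCausalLM.from_pretrained("my-1b-base-model")

config = LoraConfig(
    r=8,
    target_modules=["q_proj", "v_proj"],
    # Replicate layer ranges so the stack grows from 22 to 32 blocks
    # (example ranges: 16 + 16 = 32 layers).
    layer_replication=[(0, 16), (6, 22)],
)
model = get_peft_model(base, config)

# ... train the adapter ...

merged = model.merge_and_unload()
merged.save_pretrained("merged-model")
# Reloading "merged-model" gives back only the original 22 blocks.
```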
Expected behavior
Generate a modeling_config.py script that works properly with layer_replication.
This is not easily possible. The reason is that those replicated layers share the underlying base weights between multiple layers. Therefore, we cannot merge LoRA weights, as different LoRA weights would be merged into the base weights, resulting in incorrect outputs.
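A rough illustration of the sharing problem in plain PyTorch (not PEFT internals): two replicated positions in the stack point at the same base weight, so merging their two different LoRA deltas into that single tensor corrupts both.

```python
import torch
import torch.nn as nn

# One base layer reused at two positions in the stack, as with layer replication.
shared = nn.Linear(4, 4, bias=False)
layer_a, layer_b = shared, shared  # same underlying weight tensor

# Each replica learns its own LoRA delta during training.
delta_a = torch.randn(4, 4) * 0.01
delta_b = torch.randn(4, 4) * 0.01

# "Merging" writes both deltas into the single shared tensor ...
with torch.no_grad():
    shared.weight += delta_a
    shared.weight += delta_b

# ... so layer_a now also carries delta_b (and vice versa), which is not
# what either position computed during training.
```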
What I mean is: when I create a LoRA with layer_replication to expand the blocks from 22 to 32, I merge and save it, but when I load the output model, I find there are only 22 blocks in the final model.
As mentioned, merging with layer replication isn't really possible.
Also, when you load the model, make sure that you first load the base model, then the LoRA adapter using PeftModel.from_pretrained(...). This should restore the replicated layers.
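For reference, a loading sketch along those lines (paths are placeholders):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the original (22-block) base model first, then attach the saved adapter.
base = AutoModelForCausalLM.from_pretrained("my-1b-base-model")   # placeholder
model = PeftModel.from_pretrained(base, "my-lora-adapter")        # placeholder

# PeftModel reads layer_replication from adapter_config.json and rebuilds
# the expanded layer stack before applying the LoRA weights.
```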
Sorry, what I mean is that I want to load the base model and the LoRA model, then call merge_and_unload and get a new 1.5B model with 32 blocks, not the original 1B model with 22 blocks.
This is not really an option right now with PEFT. What you could try is to create clones of the weights that are currently being shared, edit the adapter_config.json to remove the layer_replication entry, then load the LoRA adapter and check whether merge_and_unload works.
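A rough, untested sketch of that workaround; the paths, the model.layers attribute, and the assumption that unload() keeps the expanded layer stack all depend on the actual architecture and PEFT version:

```python
import copy
import json

from transformers import AutoModelForCausalLM
from peft import PeftModel

base_name = "my-1b-base-model"      # placeholder
adapter_dir = "my-lora-adapter"     # placeholder

# 1. Materialize the expanded (32-block) model, drop the LoRA layers without
#    merging, and deep-copy every block so no base weights are shared anymore.
base = AutoModelForCausalLM.from_pretrained(base_name)
expanded = PeftModel.from_pretrained(base, adapter_dir).unload()
layers = expanded.model.layers                      # LLaMA-style layout; may differ
for i in range(len(layers)):
    layers[i] = copy.deepcopy(layers[i])            # break the weight sharing
expanded.config.num_hidden_layers = len(layers)
expanded.save_pretrained("expanded-base-32")

# 2. Remove the layer_replication entry so PEFT does not replicate layers
#    again on top of the already-expanded base.
with open(f"{adapter_dir}/adapter_config.json") as f:
    cfg = json.load(f)
cfg.pop("layer_replication", None)
with open(f"{adapter_dir}/adapter_config.json", "w") as f:
    json.dump(cfg, f, indent=2)

# 3. Load the adapter onto the expanded base and try merging.
new_base = AutoModelForCausalLM.from_pretrained("expanded-base-32")
model = PeftModel.from_pretrained(new_base, adapter_dir)
merged = model.merge_and_unload()
merged.save_pretrained("merged-1.5b-model")
```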
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.