Is it possible to "unload" the PEFT LoRA weights after mutating the base model with PeftModel.from_pretrained?
I'd like to load multiple LoRA models on top of a base model, and unloading the whole base model every time is time-consuming. I was wondering if there's a way to unload the PEFT model while keeping the base model in memory.
You may try `model.disable_adapter_layers()` or set `merge_weights=False` in `peft_config`.
After `disable_adapter_layers()`, according to the code, if the original weights were modified, they are reverted after a forward pass. Alternatively, if you set `merge_weights=False`, the weights are never modified in the first place.
Disclaimer: I'm new to this library; this is just based on reading the code.
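A minimal sketch of that approach, assuming `model` is a `PeftModel` wrapping a LoRA adapter (the base model name and adapter path below are placeholders; double-check the method names against your PEFT version):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model once, then wrap it with a LoRA adapter.
base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model
model = PeftModel.from_pretrained(base_model, "path/to/lora-adapter")  # placeholder path

# Turn the LoRA layers off so the forward pass uses only the base weights.
model.disable_adapter_layers()

# ... run inference with the plain base model here ...

# Turn the LoRA layers back on when you want adapted behavior again.
model.enable_adapter_layers()
```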
Hello @lxe, the recent PR #227 should enable this. Could you please try the main branch and let us know if it solves this issue?
@pacman100 Does the PR allow swapping LoRA adapters as well? I would like to keep a single base model in memory and swap between several LoRA-trained adapters on the fly (GPT-2, if that matters).
I tested the main branch as of a few days ago and wasn't able to. I got an error when trying to load another adapter after using `disable_adapter_layers()`. I'm new to the library, so maybe I'm doing something wrong.
Hello @accountForIssues, I'm currently working on it and will ping here when the PR is ready.
A much-awaited feature!
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
This is now fixed with multi-adapter loading! Closing.
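For anyone landing here later, a rough sketch of the multi-adapter flow this refers to (adapter paths and names are placeholders; check the docs for the exact API in your PEFT version):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# The base model is loaded exactly once and stays in memory.
base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

# Attach the first adapter under an explicit name.
model = PeftModel.from_pretrained(base_model, "path/to/adapter-a", adapter_name="adapter_a")

# Load a second adapter on top of the same base model; no base-model reload needed.
model.load_adapter("path/to/adapter-b", adapter_name="adapter_b")

# Swap between adapters on the fly.
model.set_adapter("adapter_b")
# ... generate with adapter_b ...
model.set_adapter("adapter_a")
# ... generate with adapter_a ...
```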
Hi @lxe, I'm new to PEFT. The `model.merge_and_unload()` method looks like magic to me. Could you give a quick introduction to it?
As I understand it, LoRA adds new trainable parameters/layers and inserts them into the base model, so the LoRA model has additional structure on top of the base model. Yet we can save the merged-and-unloaded model and reload it with the plain `base_model.from_pretrained(unloaded_model_path)` interface. So where did the additional layers and parameters go?
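To illustrate the idea: for LoRA, each adapted layer's effective weight is the frozen weight plus a low-rank update, roughly `W + (alpha / r) * B @ A`, so the extra parameters can be folded into `W` itself, after which the wrapper modules are no longer needed. A hedged sketch of the round trip (paths and the base model name are placeholders):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model
model = PeftModel.from_pretrained(base_model, "path/to/lora-adapter")  # placeholder path

# merge_and_unload() folds each low-rank update into the frozen weight,
#   W_merged = W + (alpha / r) * B @ A
# then strips the LoRA wrapper modules, returning a plain transformers model.
merged_model = model.merge_and_unload()

# The result has the original architecture, so it round-trips through the
# standard save/load interface with no PEFT dependency.
merged_model.save_pretrained("path/to/merged-model")
reloaded = AutoModelForCausalLM.from_pretrained("path/to/merged-model")
```

So after merging, the "additional" layers and parameters are gone as separate objects; their effect lives on inside the updated base weight matrices.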