Add `save_base_model=True` attribute to `save_pretrained` method
Currently, when calling `model.save_pretrained`, only the adapter weights are stored, not the frozen base model. Would it make sense to add a kwargs parameter, e.g. `save_base_model`, to also save the base model weights for easier offline usage, e.g. for deployment?
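A minimal sketch of what the proposed call could look like, assuming the kwarg were added (the `save_base_model` flag does not exist in peft today):

```python
# hypothetical: save_base_model is the proposed kwarg, not a real peft argument
model.save_pretrained("output_dir", save_base_model=True)
# expected result: adapter weights plus the frozen base model weights in output_dir
```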
I'm using diffusers, and it would be very helpful to save the base model and the LoRA model together. But this is not supported yet, since `LoraModel` has no `save_pretrained` method.
@haofanwang until we have an integration, you can call `model.base_model.save_pretrained()` at the end of your training.
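Roughly like this at the end of a training script (a sketch, assuming `model` is the peft-wrapped model and the output paths are placeholders):

```python
model.save_pretrained("output/adapter")                # adapter weights only
model.base_model.save_pretrained("output/base_model")  # the wrapped base model weights
```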
Thanks for the quick response. Let's take train_dreambooth.py as an example.
The LoRA is saved via:

```python
accelerator.save(state_dict, os.path.join(args.output_dir, f"{args.instance_prompt}_lora.pt"))
with open(os.path.join(args.output_dir, f"{args.instance_prompt}_lora_config.json"), "w") as f:
    json.dump(lora_config, f)
```
How should I save the other modules? Can I just merge the LoRA weights into the base model so that I can load everything in one line, as `pipeline = DiffusionPipeline.from_pretrained(base_path, torch_dtype=torch.float16)`? @philschmid
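For what it's worth, one way to get there is a sketch along these lines, assuming a peft version that provides `merge_and_unload`, that the saved config round-trips into `LoraConfig`, and that LoRA was only applied to the unet; the paths are placeholders:

```python
import json

import torch
from diffusers import DiffusionPipeline
from peft import LoraConfig, get_peft_model, set_peft_model_state_dict

base_path = "path/to/base_model"  # placeholder
pipe = DiffusionPipeline.from_pretrained(base_path, torch_dtype=torch.float16)

# re-wrap the unet with the saved LoRA config and load the trained weights
with open("output/prompt_lora_config.json") as f:
    lora_config = LoraConfig(**json.load(f))
unet = get_peft_model(pipe.unet, lora_config)
set_peft_model_state_dict(unet, torch.load("output/prompt_lora.pt"))

# fold the LoRA deltas into the base weights and drop the peft wrappers
pipe.unet = unet.merge_and_unload()

# now the whole pipeline can be reloaded later in one from_pretrained call
pipe.save_pretrained("output/merged")
```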
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
I need exactly this feature, but from this thread it's not entirely clear to me how to save the full fine-tuned model (base_model + adapter_model). Here is my corresponding Colab notebook (it saves only the adapter weights).
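For anyone landing here, a minimal sketch of one workaround, assuming a transformers model wrapped with peft (the paths are placeholders):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# after training: `model` is the PeftModel returned by get_peft_model(...)
model.save_pretrained("out/adapter")                 # adapter weights only
model.get_base_model().save_pretrained("out/base")   # underlying base model

# later, fully offline: rebuild the fine-tuned model from the two parts
base = AutoModelForCausalLM.from_pretrained("out/base")
model = PeftModel.from_pretrained(base, "out/adapter")
```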