
What is the best way to run inference after LoRA fine-tuning with the PEFT approach?

Open pradeepdev-1995 opened this issue 1 year ago • 1 comments

Here is the SFTTrainer setup I used for fine-tuning Mistral:

trainer = SFTTrainer(
    model=peft_model,
    train_dataset=data,
    peft_config=peft_config,
    dataset_text_field=" column name",
    max_seq_length=3000,
    tokenizer=tokenizer,
    args=training_arguments,
    packing=packing,
)
trainer.train()

I have found several different ways to run inference with the fine-tuned model after PEFT-based LoRA fine-tuning.

Method - 1

Save the adapter after training completes, merge it into the base model, then use the merged model for inference:

import torch
from peft import PeftModel

trainer.model.save_pretrained("new_adapter_path")
finetuned_model = PeftModel.from_pretrained(
    base_model,
    "new_adapter_path",
    torch_dtype=torch.float16,
    is_trainable=False,
    device_map="auto",
)
finetuned_model = finetuned_model.merge_and_unload()

Method - 2

Save checkpoints during training, then load the checkpoint with the lowest loss:

import torch
from peft import PeftModel

finetuned_model = PeftModel.from_pretrained(
    base_model,
    "least loss checkpoint path",
    torch_dtype=torch.float16,
    is_trainable=False,
    device_map="auto",
)
finetuned_model = finetuned_model.merge_and_unload()
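The "least loss checkpoint path" above is a placeholder. As a sketch (not from the original thread), one way to locate it programmatically is to scan the Trainer's `checkpoint-*` directories and read each `trainer_state.json`, whose `log_history` entries record the training loss; the helper name `best_checkpoint` is hypothetical:

```python
import json
import os


def best_checkpoint(output_dir: str) -> str:
    """Return the checkpoint directory whose last logged loss is smallest.

    Assumes the standard Trainer layout: output_dir/checkpoint-*/trainer_state.json,
    where "log_history" is a list of dicts that may contain a "loss" field.
    """
    best_path, best_loss = None, float("inf")
    for name in os.listdir(output_dir):
        ckpt = os.path.join(output_dir, name)
        state_file = os.path.join(ckpt, "trainer_state.json")
        if not (name.startswith("checkpoint-") and os.path.isfile(state_file)):
            continue
        with open(state_file) as f:
            state = json.load(f)
        # Take the most recent logged training loss for this checkpoint.
        losses = [e["loss"] for e in state.get("log_history", []) if "loss" in e]
        if losses and losses[-1] < best_loss:
            best_path, best_loss = ckpt, losses[-1]
    return best_path
```

The returned path can then be passed to `PeftModel.from_pretrained` in place of the placeholder string.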

Method - 3

The same approach, but loading with the AutoPeftModelForCausalLM class:

import torch
from peft import AutoPeftModelForCausalLM

finetuned_model = AutoPeftModelForCausalLM.from_pretrained(
    "output directory checkpoint path",
    low_cpu_mem_usage=True,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="cuda",
)
finetuned_model = finetuned_model.merge_and_unload()

Method - 4

Use AutoPeftModelForCausalLM with the output folder itself, without specifying a particular checkpoint:

import torch
from peft import AutoPeftModelForCausalLM

finetuned_model = AutoPeftModelForCausalLM.from_pretrained(
    training_args.output_dir,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
finetuned_model = finetuned_model.merge_and_unload()

Method - 5

Any of the above methods, but without merging, i.e. skipping this line:

# finetuned_model = finetuned_model.merge_and_unload()

Which method should I actually use for inference, and when should I prefer one over another?

pradeepdev-1995 avatar Dec 29 '23 09:12 pradeepdev-1995

Hi @pradeepdev-1995, methods 1-4 are essentially the same: they all merge the LoRA weights into the base model, so the resulting model has the same architecture as the model before fine-tuning, only with different weights. Method 5 does not merge the LoRA weights into the base model, so it will be somewhat slower than methods 1-4 during inference.
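To illustrate this numerically (a minimal sketch, not from the original thread, using NumPy with toy shapes in place of real model weights): with LoRA, the effective weight of a layer is W_eff = W + (alpha/r) · B · A. Merging bakes B · A into W once up front, so inference needs a single matmul per adapted layer instead of three, while producing identical outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                      # hidden size and LoRA rank (illustrative)
alpha = 4                        # LoRA scaling numerator
W = rng.standard_normal((d, d))  # frozen base weight
A = rng.standard_normal((r, d))  # LoRA down-projection
B = rng.standard_normal((d, r))  # LoRA up-projection
x = rng.standard_normal(d)       # one input activation

scaling = alpha / r

# Method 5 style: keep the adapter separate (two extra matmuls per layer).
y_unmerged = W @ x + scaling * (B @ (A @ x))

# Methods 1-4 style: merge once, then a single matmul at inference time.
W_merged = W + scaling * (B @ A)
y_merged = W_merged @ x

assert np.allclose(y_unmerged, y_merged)
```

This is also why the merged model keeps the base model's architecture: `W_merged` has exactly the shape of `W`, so the adapter disappears into the existing layers.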

XinyuYe-Intel avatar Jan 25 '24 01:01 XinyuYe-Intel