LLaVA
[Question] How to do a second finetune on a finetuned checkpoint with LoRA?
Question
I used train_mem.py with LoRA for the initial finetuning, similar to the linked question. Now I want to perform another round of finetuning on a different dataset, starting from the checkpoint of the first finetune. How can I accomplish this?
Here's what I tried:

1. Go to `/pvcvolume/LLaVA/llava/train/train.py`.
2. Find the `TrainingArguments` class and add one more argument:

   ```python
   lora_path: str = field(default=None, metadata={"help": "Path to the previous lora folder."})
   ```

3. In the same file (`/pvcvolume/LLaVA/llava/train/train.py`), around line 840, add:

   ```python
   # PeftModel comes from the peft package: from peft import PeftModel
   if training_args.lora_path:
       model = PeftModel.from_pretrained(model, training_args.lora_path)
   else:
       model = get_peft_model(model, lora_config)  # Model is defined here!
   ```

4. Add the `lora_path` argument to your `training.sh` script.
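For step 4, the new flag is passed like any other training argument. A minimal sketch of what the `training.sh` change could look like — the surrounding flags and all paths here are placeholders, not the exact LLaVA launch command:

```shell
# Hypothetical training.sh excerpt: --lora_path is the argument added in step 2.
# Checkpoint and data paths below are placeholders for your own runs.
deepspeed llava/train/train_mem.py \
    --lora_enable True \
    --lora_path ./checkpoints/llava-7b-lora-first-run \
    --data_path ./data/second_dataset.json \
    --output_dir ./checkpoints/llava-7b-lora-second-run
```

With this in place, the `if training_args.lora_path:` branch in step 3 loads the first run's adapter instead of creating a fresh one.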
@LeoLee7 I first finetuned LLaVA-7B with my own dataset, but found the model performed badly. So I want to perform another round of finetuning on another dataset, starting from the checkpoint of the first finetune (same task). Can I use your method to do this? Thanks!