jocastrocUnal
Same problem here
I hope this code will work in the future... it's more natural.

```python
# merge the LoRA adapters into the base model, then save it
model = peft_model.merge_and_unload()
model.save_pretrained("/model/trained")
```
I like that too :)
same here 
> I also have the same problem. The loss first decreases, then it slowly grows till it drops to zero. The image below shows the training loss in the first...
same here 
In the training arguments I set `resume_from_checkpoint = True`. But this is for the Hugging Face `Trainer`: https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments
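Roughly what I mean, as a minimal sketch: `model`, `train_dataset`, and the `output_dir` path are placeholders, and note that per the docs the actual resume happens when you pass the flag to `train()`, not through the `TrainingArguments` field itself.

```python
from transformers import TrainingArguments, Trainer

# Hypothetical setup: model and train_dataset are assumed to be defined elsewhere.
training_args = TrainingArguments(
    output_dir="./outputs",   # checkpoints get written here
    save_strategy="steps",
    save_steps=500,
)

trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)

# Resuming is done through train(); True means
# "pick up the most recent checkpoint found in output_dir".
trainer.train(resume_from_checkpoint=True)
```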