jocastrocUnal

Results: 7 comments of jocastrocUnal

I hope in the future this code could work... it's more natural.

```python
model = peft_model.merge_and_unload()
model.save_pretrained("/model/trained")
```
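For context, a minimal sketch of how merging a LoRA adapter into its base model already works with `peft`, assuming a base-model name and an adapter path; both are illustrative placeholders, not taken from the original post.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder base model and adapter location (assumptions for illustration).
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
peft_model = PeftModel.from_pretrained(base, "/model/adapter")

# Fold the LoRA weights into the base weights and drop the adapter wrappers,
# leaving a plain transformers model that can be saved and reloaded normally.
model = peft_model.merge_and_unload()
model.save_pretrained("/model/trained")

# Save the tokenizer alongside so the merged model directory is self-contained.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.save_pretrained("/model/trained")
```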

same here ![image](https://github.com/PromtEngineer/localGPT/assets/93233286/63ab95b5-960d-4382-8517-2b747c78a65f)

> I also have the same problem. The loss first decreases, then it slowly grows till it drops to zero. The image below shows the training loss in the first...

same here ![image](https://github.com/artidoro/qlora/assets/93233286/57835276-e127-4e86-9421-dc801de57361)

In the training arguments I set `resume_from_checkpoint = True`, but this applies to the Hugging Face `Trainer`. https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments
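A minimal sketch of checkpoint resumption with the Hugging Face `Trainer`; the model name, dataset, and `output_dir` below are illustrative assumptions, not values from the comment.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# Placeholder model and data (assumptions for illustration).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Tiny slice of a public dataset, tokenized so the Trainer can consume it.
dataset = load_dataset("imdb", split="train[:1%]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

args = TrainingArguments(
    output_dir="out",   # checkpoints are written here as out/checkpoint-<step>
    save_steps=50,
    per_device_train_batch_size=8,
)

trainer = Trainer(model=model, args=args, train_dataset=dataset)

# resume_from_checkpoint=True picks up the latest checkpoint in output_dir;
# a specific path (e.g. "out/checkpoint-50") can be passed instead. On the
# very first run, omit the argument, since no checkpoint exists yet.
trainer.train(resume_from_checkpoint=True)
```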