peft
Resume the training of a LoRA
Hi everyone,
Is there any example showing how to resume the training of a LoRA, if it is possible?
Yes! I am also wondering when we will have such a tutorial. It would be of great use.
Is this still a work in progress, or was it postponed? :)
Hi everyone, this feature has recently been introduced in the HF Trainer here: https://github.com/huggingface/transformers/pull/24274. You can benefit from it if you install transformers from source. Examples are attached to that PR, but the TL;DR is that you should call trainer.train(resume_from_checkpoint=True) and make sure you have already trained a model using the HF Trainer in the same working folder.
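For reference, here is a minimal sketch of what that can look like end to end. The base model, dataset, and output_dir below are placeholder assumptions, not from the PR; the essential parts are reusing the same output_dir as the earlier run and passing resume_from_checkpoint=True:

```python
# Minimal sketch of resuming LoRA training with the HF Trainer.
# The model name, dataset, and output_dir are placeholders; adapt
# them to the setup of your original training run.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "facebook/opt-350m"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with a LoRA adapter via peft.
lora_config = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

dataset = load_dataset("imdb", split="train[:1%]")  # placeholder dataset
dataset = dataset.map(
    lambda x: tokenizer(x["text"], truncation=True, max_length=512),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-run",  # must match the folder of the earlier run
        save_strategy="steps",
        save_steps=100,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

# Picks up from the latest checkpoint-* folder inside output_dir;
# this raises an error if no checkpoint exists there yet.
trainer.train(resume_from_checkpoint=True)
```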
Closing the issue; feel free to re-open if you think this has not been addressed!
Thanks