Continue finetuning from a checkpoint
I have trained a model on my custom dataset with the -t option. Now I have some new data and I wish to continue fine-tuning from the previously fine-tuned checkpoint. Should I use the -r option or the -t option followed by the path to my checkpoint?
-r means resume training from a checkpoint (loads model.state_dict, ema.state_dict, optimizer.state_dict, etc.).
-t means fine-tune from a checkpoint (loads only ema.state_dict into model.state_dict).
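The difference between the two modes can be sketched as follows. This is an illustrative PyTorch snippet, not the repo's actual training code: the checkpoint keys ("model", "ema", "optimizer", "last_epoch") and function names are assumptions for the sake of the example, and a tiny nn.Linear stands in for the real detector.

```python
import torch
import torch.nn as nn

def resume(model, optimizer, ckpt):
    """-r: restore everything so training continues exactly where it stopped."""
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    # the EMA module's state would be restored here as well
    return ckpt["last_epoch"]

def finetune(model, ckpt):
    """-t: only the EMA weights seed the model; optimizer and epoch start fresh."""
    model.load_state_dict(ckpt["ema"])
    return 0

# Demo with a tiny stand-in model and an in-memory "checkpoint"
model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
ckpt = {"model": model.state_dict(), "ema": model.state_dict(),
        "optimizer": opt.state_dict(), "last_epoch": 42}
```

With this checkpoint, `resume(model, opt, ckpt)` returns 42 (training picks up at the saved epoch), while `finetune(model, ckpt)` returns 0 (a fresh run seeded with the EMA weights).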
Hi, thank you for your great work. I also want to fine-tune the model from a pretrained checkpoint on my custom dataset, for example rtdetr_r50vd_6x_coco_from_paddle.pth. However, my custom dataset has fewer classes (for example, I selected 20 of the 80 COCO classes). Can I load the pretrained weights and fine-tune them on my custom dataset?
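The usual PyTorch recipe for this situation is to load every pretrained weight whose shape matches the new model and skip the class-dependent head, which is then trained from scratch. A hedged sketch follows; the TinyDetector class and filter_matching helper are made up for illustration and are not part of the RT-DETR codebase.

```python
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Stand-in for a detector: a shared backbone plus a class-dependent head."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.backbone = nn.Linear(8, 16)               # shape independent of classes
        self.class_head = nn.Linear(16, num_classes)   # shape depends on num_classes

def filter_matching(ckpt_state, model_state):
    """Keep only checkpoint entries whose name and shape match the new model."""
    return {k: v for k, v in ckpt_state.items()
            if k in model_state and v.shape == model_state[k].shape}

pretrained = TinyDetector(num_classes=80)  # plays the role of the COCO checkpoint
finetune   = TinyDetector(num_classes=20)  # custom dataset with fewer classes

kept = filter_matching(pretrained.state_dict(), finetune.state_dict())
# strict=False leaves the mismatched head keys at their random initialization
finetune.load_state_dict(kept, strict=False)
```

The backbone weights transfer; the 80-class head is dropped because its shape no longer matches, and the 20-class head starts from random initialization before fine-tuning.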