RT-DETR

Continue finetuning from a checkpoint

Open tywei08 opened this issue 1 year ago • 2 comments

I have trained a model on my custom dataset with the -t option. Now I have some new data and I want to continue finetuning from my previous finetuned checkpoint. Should I use the -r option or the -t option followed by the path to my checkpoint?

tywei08 avatar Jul 10 '24 15:07 tywei08

-r means resume training from a checkpoint (it restores model.state_dict, ema.state_dict, optimizer.state_dict, etc.). -t means finetune based on a checkpoint (it only loads ema.state_dict into model.state_dict).

lyuwenyu avatar Jul 11 '24 02:07 lyuwenyu
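To make the distinction concrete, here is a minimal sketch (not RT-DETR's actual loader; the checkpoint key names "model", "ema", and "optimizer" are assumptions) of what the two options do with a checkpoint dict:

```python
# Sketch of -r (resume) vs -t (tuning) checkpoint loading, per the comment
# above. States are modeled as plain dicts; real code would use torch.load
# and module.load_state_dict, and the key names here are assumptions.

def resume_from_checkpoint(ckpt, model_state, ema_state, optimizer_state):
    """-r: restore everything (model, EMA, optimizer) to continue the same run."""
    model_state.update(ckpt["model"])
    ema_state.update(ckpt["ema"])
    optimizer_state.update(ckpt["optimizer"])

def tune_from_checkpoint(ckpt, model_state):
    """-t: load only the EMA weights into the model as a finetuning start;
    the optimizer and EMA trackers start fresh."""
    model_state.update(ckpt["ema"])

if __name__ == "__main__":
    ckpt = {"model": {"w": 1.0}, "ema": {"w": 2.0}, "optimizer": {"lr": 0.01}}

    model, ema, opt = {}, {}, {}
    resume_from_checkpoint(ckpt, model, ema, opt)
    print(model, ema, opt)   # all three states restored

    model = {}
    tune_from_checkpoint(ckpt, model)
    print(model)             # only the EMA weights, used as the new init
```

So for the original question, continuing to finetune on new data corresponds to -t with the previous checkpoint, while -r is for picking up an interrupted run exactly where it stopped.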

> -r means resume training from a checkpoint (it restores model.state_dict, ema.state_dict, optimizer.state_dict, etc.). -t means finetune based on a checkpoint (it only loads ema.state_dict into model.state_dict).

Hi, thank you for your great work. I also want to fine-tune the model on my custom dataset starting from a pretrained checkpoint, for example rtdetr_r50vd_6x_coco_from_paddle.pth. However, my custom dataset has fewer classes (for example, I selected 20 of the 80 COCO classes). Can I load the pretrained weights and fine-tune them on my custom dataset?

hiepbk avatar Aug 08 '24 02:08 hiepbk
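A common way to handle a class-count mismatch like this is to load only the pretrained parameters whose names and shapes match the new model, leaving the classification head (whose output dimension changed from 80 to 20) randomly initialized. Here is a minimal sketch of that filtering, with parameter shapes modeled as tuples; with real torch tensors the comparison would be `v.shape == model_state[k].shape`, followed by `model.load_state_dict(filtered, strict=False)`:

```python
# Sketch: keep only pretrained entries compatible with the new model.
# Parameter names below are illustrative, not RT-DETR's actual names.

def filter_compatible(pretrained, model_state):
    """Return the subset of pretrained params whose name and shape both
    match the target model; mismatched heads are dropped so they keep
    their fresh random initialization."""
    return {
        k: v
        for k, v in pretrained.items()
        if k in model_state and v == model_state[k]
    }

if __name__ == "__main__":
    # 80-class COCO checkpoint vs a 20-class target model (shapes only).
    pretrained = {"backbone.conv1.weight": (64, 3, 7, 7),
                  "head.cls.weight": (80, 256)}
    model_state = {"backbone.conv1.weight": (64, 3, 7, 7),
                   "head.cls.weight": (20, 256)}
    print(filter_compatible(pretrained, model_state))
    # the backbone weight survives; the 80-class head is dropped
```

Whether RT-DETR's own loading path already tolerates this depends on whether it calls load_state_dict with strict=False; if it does not, filtering the checkpoint dict yourself before loading is the safe route.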