VAC_CSLR

Finetuning and continue training

Open khoangothe opened this issue 2 years ago • 1 comments

Hello, thank you for the awesome work. I am trying to use the model on another dataset, so I figure I should structure my data according to the format of phoenix2014. Is there anything else I should worry about, or is running the preprocessing with the same structure going to be alright?
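As a rough sketch of what "the same structure" could mean, a custom dataset might be laid out to mirror the PHOENIX-2014 release (frames under `features/fullFrame-210x260px/<split>/...` and gloss annotations under `annotations/manual/<split>.corpus.csv`). The directory and file names below are illustrative and should be checked against the actual dataset release and the repo's preprocessing script:

```python
# Hypothetical sketch: mirror the PHOENIX-2014 directory layout for a custom dataset.
# Names ("my_dataset", "video_001") are placeholders, not from the repo.
from pathlib import Path

root = Path("my_dataset")

# one folder of extracted frames per video, per split
(root / "features" / "fullFrame-210x260px" / "train" / "video_001" / "1").mkdir(
    parents=True, exist_ok=True
)

# gloss annotations as a pipe-separated corpus file, one row per video
ann_dir = root / "annotations" / "manual"
ann_dir.mkdir(parents=True, exist_ok=True)
(ann_dir / "train.corpus.csv").write_text(
    "id|folder|signer|annotation\n"
    "video_001|video_001/1/*.png|Signer01|HELLO WORLD\n"
)
```

If the preprocessing script walks this tree and reads the corpus file, pointing it at a tree shaped like this should be the main requirement; anything beyond that (frame size, FPS) would be handled by the preprocessing itself.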

Also, since I am training on Google Colab, I won't be able to train for 80 epochs consecutively and plan to split it into several different runs. Is there a built-in function to load the previous model and continue training (or finetuning, if I want to finetune the pretrained model), or how should I begin to tackle this problem? I am not sure if the --load-weights flag is enough. Thank you so much.

khoangothe avatar Jul 13 '22 01:07 khoangothe

Thanks for your attention. If the resolution of your video data is fairly high, a human-detection crop can preserve more useful information before resizing the whole image. Our recent version can achieve comparable results within 40 epochs, and --load-checkpoints can load the previous model and continue training. Details can be found in the config and here.
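For splitting training across Colab sessions, the underlying pattern behind a flag like --load-checkpoints is: save the epoch counter and model state after each epoch, and on restart load them and continue from where you left off. The sketch below is a framework-free illustration of that pattern (the file name and the dict fields are my own, not the repo's actual checkpoint format):

```python
# Hypothetical sketch of resuming training across sessions (e.g. Colab):
# persist a checkpoint every epoch; a later run loads it and continues.
import os
import pickle

CKPT = "checkpoint.pkl"  # illustrative path, not the repo's actual filename

# start this demo from a clean state
if os.path.exists(CKPT):
    os.remove(CKPT)


def train(total_epochs):
    # resume from the checkpoint if one exists, otherwise start fresh
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            state = pickle.load(f)
    else:
        state = {"epoch": 0, "weights": 0.0}

    for epoch in range(state["epoch"], total_epochs):
        state["weights"] += 1.0  # stand-in for one epoch of optimisation
        state["epoch"] = epoch + 1
        with open(CKPT, "wb") as f:
            pickle.dump(state, f)  # persist so a later run can resume
    return state


# first session covers epochs 0-19; a second session finishes 20-39
train(20)
final = train(40)
```

In a real setup the `weights` field would be the model's state dict (and ideally the optimizer and LR-scheduler state as well), but the control flow is the same as what a resume flag does for you.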

Good luck~

ycmin95 avatar Jul 13 '22 02:07 ycmin95