nanodet
Training nanodet-plus-m with backbone shufflenetv2_1.0x from scratch gets significantly lower AP
Hi, I trained nanodet-plus-m (image size 384*384) with the default settings and got 0.262 AP at epoch 300 and 0.251 AP at epoch 90.
However, when I train from scratch (setting pretrain=False in the yaml; see the config sketch below), I get 0.213 AP at epoch 90.
Have you tried training these models from scratch? What were your results?
[image: results with pretrained weights]
[image: results without pretrained weights]
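For reference, a minimal sketch of the change being described, assuming the backbone section layout used in nanodet's example configs (field names and values may differ slightly between versions):

```yaml
model:
  arch:
    backbone:
      name: ShuffleNetV2
      model_size: 1.0x
      out_stages: [2, 3, 4]
      pretrain: False  # skip the ImageNet weights and train the backbone from scratch
```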
For tiny backbones, pre-training is very important.
@RangiLyu Thanks for your reply. I will pre-train the backbone before using it on the target dataset.
I also read the tips from yolov5: Tips-for-Best-Training-Results. However, it recommends whether or not to pre-train based on dataset scale rather than model size. What do you think about this?
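For illustration, here is a minimal sketch of the weight hand-off the thread is discussing: initializing a fresh ShuffleNetV2 1.0x from ImageNet-pretrained weights before fine-tuning on a target dataset. This uses torchvision's ShuffleNetV2 as a stand-in for the detector backbone; it is an assumption for illustration, not nanodet's actual loading code.

```python
import torch
from torchvision.models import shufflenet_v2_x1_0

# ImageNet-pretrained ShuffleNetV2 1.0x as the weight source.
pretrained_sd = shufflenet_v2_x1_0(pretrained=True).state_dict()

# Randomly initialized model standing in for a detector backbone.
model = shufflenet_v2_x1_0(pretrained=False)
target_sd = model.state_dict()

# Copy only tensors whose names and shapes match; anything else
# (e.g. a task-specific head that differs) stays at random init.
matched = {k: v for k, v in pretrained_sd.items()
           if k in target_sd and v.shape == target_sd[k].shape}
target_sd.update(matched)
model.load_state_dict(target_sd)
print(f"initialized {len(matched)}/{len(target_sd)} tensors from ImageNet weights")
```

With a matching architecture all tensors transfer; when the backbone diverges from torchvision's (different stages or heads), only the overlapping tensors are copied and the rest train from scratch.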
