FCOS.Pytorch
central sampling
Could you please tell me how to remove central sampling? Thanks!
You don't need to remove it; I fixed a bug, and now it works well.
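For reference, center sampling only treats a feature-map location as a positive sample if it falls within a small radius (measured in FPN strides) of a ground-truth centre, instead of anywhere inside the box. Below is a minimal sketch of that assignment rule; the function name, tensor layout, and the radius of 1.5 are illustrative assumptions, not necessarily this repo's exact code:

```python
import torch

def center_sample_mask(points, gt_boxes, strides, radius=1.5):
    """
    Keep a location as a positive candidate only if it lies within
    `radius * stride` of a ground-truth box centre.

    points:   (N, 2) tensor of (x, y) locations on the input image
    gt_boxes: (M, 4) tensor of boxes as (x1, y1, x2, y2)
    strides:  (N,)   tensor giving the FPN stride of each location
    returns:  (N, M) boolean mask
    """
    cx = (gt_boxes[:, 0] + gt_boxes[:, 2]) / 2   # (M,) GT centre x
    cy = (gt_boxes[:, 1] + gt_boxes[:, 3]) / 2   # (M,) GT centre y

    r = radius * strides[:, None]                # (N, 1) sampling radius per location
    # Shrunken "centre box" around each GT centre, clipped to the GT box itself
    x1 = torch.max(cx[None] - r, gt_boxes[None, :, 0])
    y1 = torch.max(cy[None] - r, gt_boxes[None, :, 1])
    x2 = torch.min(cx[None] + r, gt_boxes[None, :, 2])
    y2 = torch.min(cy[None] + r, gt_boxes[None, :, 3])

    px, py = points[:, 0:1], points[:, 1:2]      # (N, 1) each
    inside = (px > x1) & (px < x2) & (py > y1) & (py < y2)
    return inside
```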
Thanks! I trained FCOS on the VOC dataset and the final mAP is about 0.67. I trained RetinaNet with the same strategy (GroupNorm and GIoU loss), and its final mAP is almost 0.69. Do you know the reason?
High mAP numbers in papers are usually due to many tricks and data augmentation. My implementation doesn't use any data augmentation, which is very important for getting a good result. Batch size is another important factor: usually the larger the better, though a larger batch size also means longer training time.
Yeah, I changed your dataloader file and added some data augmentation like flipping and cropping. I just don't know why RetinaNet is better than FCOS. Really, thanks for your reply! 😃
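(The flip part was roughly along these lines; a simplified sketch rather than the exact code, and the box-coordinate convention depends on how the annotations are stored:)

```python
import random
import numpy as np

def random_horizontal_flip(image, boxes, p=0.5):
    """Flip an HxWxC image and its (N, 4) boxes (x1, y1, x2, y2) left-right with probability p."""
    if random.random() < p:
        w = image.shape[1]
        image = np.ascontiguousarray(image[:, ::-1, :])
        if boxes is not None and len(boxes) > 0:
            boxes = boxes.copy()
            old_x1 = boxes[:, 0].copy()
            boxes[:, 0] = w - boxes[:, 2]  # new x1 comes from the old x2
            boxes[:, 2] = w - old_x1       # new x2 comes from the old x1
    return image, boxes
```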
It's really not easy to train a network. The difference between 0.67 and 0.69 is very small, and neither is a good result on the VOC dataset. Did you train on VOC07+12 and then evaluate on the VOC07 test set? I think around 0.75 mAP is a more typical result.
No, I just trained the model on the VOC12 train set and evaluated on the VOC07 val set. I still use ResNet-50 as my backbone. 0.75 mAP is much better than my model's performance.
A common approach is to train on the VOC07 and VOC12 train+val sets, then evaluate on the VOC07 test set. The dataset setting you used is unusual. A 0.02 mAP gap can't tell you which model is better; it only shows that they have similar performance. But FCOS has a simpler structure, which saves more memory during training.
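If it helps, the usual 07+12 split can be put together with torchvision's built-in VOC wrapper. A minimal sketch (the root path is a placeholder, and this repo uses its own dataset class, so transforms are omitted):

```python
from torch.utils.data import ConcatDataset
from torchvision.datasets import VOCDetection

# Train on VOC2007 trainval + VOC2012 trainval, evaluate on VOC2007 test.
# "data" is the directory that contains VOCdevkit/.
voc07_trainval = VOCDetection("data", year="2007", image_set="trainval")
voc12_trainval = VOCDetection("data", year="2012", image_set="trainval")
train_set = ConcatDataset([voc07_trainval, voc12_trainval])

voc07_test = VOCDetection("data", year="2007", image_set="test")
```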
Haha, thanks! I will do more experiments.
By the way, how many epochs are recommended for the whole VOC dataset (train on VOC07 and VOC12 train+val, test on VOC07)?
It depends on whether you train it from scratch and how large the batch size is. You can set the number of epochs a bit higher, say 50 or 100, evaluate the model after each epoch, and then pick the best checkpoint.
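Concretely, the epoch loop I mean looks something like this sketch; `train_one_epoch` and `evaluate` stand in for your own training step and VOC-eval code, they are not functions from this repo:

```python
import torch

num_epochs = 50      # try 50-100; more if training from scratch
best_map = 0.0

for epoch in range(num_epochs):
    train_one_epoch(model, train_loader, optimizer)  # placeholder: your training step
    current_map = evaluate(model, val_loader)        # placeholder: mAP on VOC07 test

    if current_map > best_map:
        best_map = current_map
        torch.save(model.state_dict(), "best_model.pth")
    print(f"epoch {epoch}: mAP = {current_map:.4f} (best = {best_map:.4f})")
```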
Really, thanks!