
The mAP on validation is very low!

Open Angzz opened this issue 6 years ago • 11 comments

I trained the model from scratch on my own dataset, and the training process looks normal, but when I use MAPMetric after each forward pass the mAP is very low, around 0.0XX%. Why? The loss is very low, around 0.00XX once num_epochs goes above 200. Can you help me, thank you!! @zhreshold

Angzz avatar Mar 30 '18 02:03 Angzz

I would use the training data as the validation set and see if the mAP is still very low. If it is, there must be a problem with the data. Otherwise, the model is overfitting badly to the training data.
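The diagnostic above can be sketched as a small helper. This is an illustrative function, not part of mxnet-ssd, and the 0.05 cutoff is an arbitrary example threshold:

```python
def diagnose_low_map(train_map, val_map, low=0.05):
    """Classify a low-mAP situation following the suggestion above.

    train_map / val_map are mAP scores in [0, 1], obtained by running
    the same evaluation once on the training set and once on the
    validation set. (Illustrative helper; threshold is arbitrary.)
    """
    if train_map < low and val_map < low:
        # The model cannot even fit its own training data: suspect
        # broken labels, image/label misalignment, or a metric/input bug.
        return "data or pipeline problem"
    if train_map >= low and val_map < low:
        # Fits the training data but not held-out data.
        return "overfitting"
    return "mAP looks plausible"
```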

zhreshold avatar Mar 30 '18 21:03 zhreshold

Thank you, I will give it a try.

Angzz avatar Apr 01 '18 02:04 Angzz

Any update on this? I am also seeing very poor mAP after over 1000 epochs. I also tried using the training data for validation and it was still very low.

alexschultz avatar Apr 04 '18 20:04 alexschultz

I keep getting exactly 0 mAP when running the evaluation script, regardless of the model used. I've tried a few pre-trained models and my own self-trained model (vgg_reduced as the base network; test mAP around 70% after 240 epochs at batch size 128). Any idea what the issue could be? I couldn't find other relevant issues.

matanhs avatar Jul 02 '18 14:07 matanhs

@matanhs I am not getting your point. Do you mean the downloaded pre-trained models are getting 0 mAP?

zhreshold avatar Jul 02 '18 17:07 zhreshold

Essentially, yes. It doesn't matter whether I use the pre-trained weights or a model I've trained myself: with the provided evaluate.py script I get 0 mAP. Any suggestions? The second part of my comment was about training getting stuck at 70% mAP when using the default settings (reduced VGG) and VOC 07+12; I'm not sure how to get the additional 7.8%. Thanks!

Update: I've been using the --deploy flag to skip the model assembly and just load the model from the file.

matanhs avatar Jul 02 '18 19:07 matanhs

Regarding the low mAP scores people have reported: I've noticed similar behavior when using the wrong input size for the model (e.g. testing resnet50 without --data-shape 512), since the default size is 300. You may want to enforce input-size matching to avoid these issues.
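One way to enforce the matching suggested above is a shape check before evaluation. A minimal sketch (a hypothetical helper, not part of mxnet-ssd; batch layout assumed to be NCHW):

```python
def check_data_shape(batch_shape, expected_size):
    """Raise if an input batch does not match the size the network was
    trained for (e.g. 512 for a 512-input model vs the default 300).

    batch_shape is (N, C, H, W). Illustrative helper only.
    """
    n, c, h, w = batch_shape
    if (h, w) != (expected_size, expected_size):
        raise ValueError(
            "input is %dx%d but the model expects %dx%d; "
            "pass --data-shape %d" % (h, w, expected_size, expected_size,
                                      expected_size))
```

Feeding 300x300 batches into a 512 model will not crash (SSD is fully convolutional), which is exactly why this mismatch silently produces near-zero mAP instead of an error.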

matanhs avatar Jul 03 '18 08:07 matanhs

> I trained the model from scratch on my own dataset, and the training process looks normal, but when I use MAPMetric after each forward pass the mAP is very low, around 0.0XX%. Why? The loss is very low, around 0.00XX once num_epochs goes above 200. Can you help me, thank you!! @zhreshold

Have you fixed it? I have the same problem. Any advice?

Jessespace avatar Nov 16 '18 10:11 Jessespace

Sorry, I remember now that I was building DSSD on top of SSD, so the cause may be that my implementation was incorrect. My usual way to debug is to draw the model's detections on the original image and check whether anything looks unusual.
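The drawing trick above needs the detections converted from normalized coordinates to pixels first. A sketch of that step, assuming the common SSD output layout of one row per detection, [class_id, score, xmin, ymin, xmax, ymax] with coordinates in [0, 1] and class_id < 0 marking padding rows:

```python
def detections_to_pixel_boxes(dets, img_w, img_h, score_thresh=0.5):
    """Convert SSD-style detection rows to pixel boxes for visual
    inspection. Assumed row layout:
    [class_id, score, xmin, ymin, xmax, ymax], coords normalized to
    [0, 1]; rows with class_id < 0 are padding. (Debugging sketch.)
    """
    boxes = []
    for cls, score, x0, y0, x1, y1 in dets:
        if cls < 0 or score < score_thresh:
            continue  # skip padding and low-confidence rows
        boxes.append((int(cls), float(score),
                      int(x0 * img_w), int(y0 * img_h),
                      int(x1 * img_w), int(y1 * img_h)))
    return boxes
```

You can then draw the returned boxes on the image with e.g. cv2.rectangle. Boxes that land outside the image, an always-empty list, or boxes clustered in one corner usually point at a coordinate-ordering or anchor bug in the implementation.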

Angzz avatar Nov 19 '18 01:11 Angzz

@Jessespace By the way, you can try gluon-cv; it's much more convenient and powerful.

Angzz avatar Nov 19 '18 01:11 Angzz

> @Jessespace By the way, you can try gluon-cv; it's much more convenient and powerful.

Thanks,

Jessespace avatar Nov 20 '18 03:11 Jessespace