mxnet-ssd
The mAP on validation is very low!
I am training the model from scratch on my own dataset and the training process looks normal, but when I run MApMetric after each forward pass, the mAP is very low, about 0.0XX%. Why is that? The loss is also very low, about 0.00XX, once num_epochs goes above 200. Can you help me? Thank you!! @zhreshold
I would use the training data as the validation set and see if the mAP is still very low. If it is, then there must be some problem with the data. Otherwise, the model is overfitting badly to the training data.
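Roughly, that check could look like the sketch below. It assumes an already trained mx.mod.Module (`mod`), an iterator over the training data (`train_iter`), and this repo's MApMetric from evaluate/eval_metric.py; the argument names may differ in your checkout:

```python
# Sketch: score the TRAINING set with the same metric used for validation.
# `mod`, `train_iter` and `class_names` are assumed to exist already.
from evaluate.eval_metric import MApMetric

metric = MApMetric(ovp_thresh=0.5, class_names=class_names)
metric.reset()
for batch in train_iter:                # iterate over the training data, not the val set
    mod.forward(batch, is_train=False)
    metric.update(batch.label, mod.get_outputs())
print(metric.get())                     # near-zero mAP here points at labels/anchors, not overfitting
```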
Thank you, I will give it a try.
Any update on this? I am also seeing very poor mAP after more than 1000 epochs. I also tried using the training data for validation and it was still very low.
I keep getting exactly 0 mAP when running the evaluation script, regardless of the model used. I've tried a few pre-trained models and my own self-trained model (vgg_reduced as the base, test mAP around 70% after 240 epochs with a batch size of 128). Any ideas what the issue could be? I couldn't find other relevant issues.
@matanhs I am not getting your point. Do you mean the downloaded pre-trained models are also getting 0 mAP?
Essentially, yes. It doesn't matter whether I use the pre-trained weights or a model I've trained myself with the provided evaluate.py script; the result is 0 mAP. Any suggestions? The second part of my comment concerns training getting stuck at 70% mAP when using the default settings (reduced VGG) and VOC07+12; I'm not sure how to get the additional 7.8%. Thanks!
Update: I've been using the --deploy flag to skip the model assembly step and just load the model from the file.
Regarding the low mAP scores people reported: I've noticed similar behavior when using the wrong input size for the model (e.g. testing resnet50 without --data-shape 512), since the default size is set to 300. You may want to enforce that the input size matches the model to avoid these issues; see the example below.
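For reference, the evaluation command then looks roughly like this; --data-shape and --deploy are the flags mentioned in this thread, while the network name, prefix and epoch are placeholders, and the exact option names may differ in your copy of evaluate.py:

```
# placeholder prefix/epoch; run `python evaluate.py --help` to confirm the option names
python evaluate.py --network resnet50 --data-shape 512 --deploy \
    --prefix ./model/deploy_ssd_resnet50_512 --epoch 240
```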
Have you fixed it? I have the same problem. Any advice?
Sorry, I remember now that I was building DSSD on top of SSD, so the reason was probably that my implementation was not right. My usual way of debugging is to draw the model's detections on the original image and check whether anything looks unusual; a sketch is below.
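A minimal sketch of that visual check, assuming the detections follow MXNet's MultiBoxDetection output format of [class_id, score, xmin, ymin, xmax, ymax] rows with coordinates normalized to [0, 1] (function and variable names here are just placeholders):

```python
import matplotlib.pyplot as plt
import matplotlib.patches as patches

def draw_detections(img, dets, thresh=0.5):
    """img: HxWx3 array; dets: rows of [class_id, score, xmin, ymin, xmax, ymax] in [0, 1]."""
    h, w = img.shape[:2]
    fig, ax = plt.subplots(1)
    ax.imshow(img)
    for cls_id, score, xmin, ymin, xmax, ymax in dets:
        if cls_id < 0 or score < thresh:   # skip padded / low-confidence rows
            continue
        rect = patches.Rectangle((xmin * w, ymin * h),
                                 (xmax - xmin) * w, (ymax - ymin) * h,
                                 fill=False, edgecolor='red', linewidth=2)
        ax.add_patch(rect)
        ax.text(xmin * w, ymin * h, '%d: %.2f' % (int(cls_id), score), color='red')
    plt.show()
```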
@Jessespace By the way, you can try gluon-cv; it's much more convenient and powerful.
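For what it's worth, running a pretrained SSD in gluon-cv takes only a few lines; this sketch follows the gluon-cv detection tutorial ('street.jpg' is a placeholder image path):

```python
from matplotlib import pyplot as plt
from gluoncv import model_zoo, data, utils

net = model_zoo.get_model('ssd_512_resnet50_v1_voc', pretrained=True)
x, img = data.transforms.presets.ssd.load_test('street.jpg', short=512)
class_ids, scores, bboxes = net(x)
utils.viz.plot_bbox(img, bboxes[0], scores[0], class_ids[0], class_names=net.classes)
plt.show()
```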
Thanks,