second.pytorch

0 mAP for Nuscenes eval?

Open deeptir18 opened this issue 5 years ago • 7 comments

Hi, I'm trying to train a model from scratch on the nuScenes data using nuscenes/all.pp.lowa.config (nothing changed). I may not have trained the model for long enough (only ~13000 steps so far), but I'm a little suspicious of the results, because evaluation repeatedly shows 0 average precision at all threshold levels for all classes (it doesn't find any true-positive matches for any of the predictions).

The model does seem to identify boxes reasonably when I manually inspect inference on one of the files -- do I just need to train for more steps before I see a non-zero average precision for any of the classes?

What config file should be used with the nuscenes model checkpoint provided? I tried all the available configs, but none seem to work. I'd like to check that the evaluation works, if possible, with a pretrained model.

Thank you!

deeptir18 avatar Aug 16 '19 22:08 deeptir18

My mAP was also 0 when lr_max was 1.5e-4, but when I set it to 3e-3, it worked. Unfortunately I still couldn't reproduce the author's PointPillars result. SECOND is also a disaster.
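For anyone looking for where that knob lives: `lr_max` sits in the one-cycle learning-rate block of the training config. A sketch of that section (field names as they appear in second.pytorch's proto-text configs; the other values here are illustrative, check your own config file):

```
optimizer: {
  adam_optimizer: {
    learning_rate: {
      one_cycle: {
        lr_max: 3e-3        # raised from 1.5e-4, per the comment above
        moms: [0.95, 0.85]
        div_factor: 10.0
        pct_start: 0.4
      }
    }
    weight_decay: 0.01
  }
  fixed_weight_decay: true
  use_moving_average: false
}
```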

qchenclaire avatar Aug 20 '19 00:08 qchenclaire

@qchenclaire How long did it take you to train the model in total? For me, with the largea.config, I just started training from scratch and spent ~4 hours on the first 2 epochs.

HenryJunW avatar Nov 15 '19 01:11 HenryJunW

@HenryJunW It should not take that long, but which config are you using? My results are far from the leaderboard results.

tjucwb avatar Feb 19 '20 15:02 tjucwb

@HenryJunW Have you gotten results yet? My AP@2.0 on the val set is just 49%, far from what the author gets.

tjucwb avatar Feb 19 '20 15:02 tjucwb

> @HenryJunW Have you gotten results yet? My AP@2.0 on the val set is just 49%, far from what the author gets.

@tjucwb Could you explain what the numbers AP@0.5, 1.0, 2.0, 4.0 represent?

vatsal-shah avatar Apr 27 '20 16:04 vatsal-shah

> @tjucwb Could you explain what the numbers AP@0.5, 1.0, 2.0, 4.0 represent?

This site really helps: https://blog.zenggyu.com/en/post/2018-12-16/an-introduction-to-evaluation-metrics-for-object-detection/
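That post covers IoU-based detection metrics in general; nuScenes specifically matches by center distance instead of IoU. A minimal sketch of the idea (not the official nuscenes-devkit implementation): a prediction is a true positive if its BEV center lies within the distance threshold (0.5, 1.0, 2.0, or 4.0 meters) of a still-unmatched ground-truth center, and the per-class mAP averages AP over the four thresholds.

```python
import numpy as np

def average_precision(pred_centers, pred_scores, gt_centers, dist_thresh):
    """AP for one class at one center-distance threshold (meters)."""
    order = np.argsort(-np.asarray(pred_scores))  # highest score first
    matched = np.zeros(len(gt_centers), dtype=bool)
    tp_count, fp_count, prec_at_tp = 0, 0, []
    for i in order:
        dists = np.linalg.norm(gt_centers - pred_centers[i], axis=1)
        candidates = np.where(~matched & (dists < dist_thresh))[0]
        if candidates.size:  # true positive: claim the nearest free ground truth
            matched[candidates[np.argmin(dists[candidates])]] = True
            tp_count += 1
            prec_at_tp.append(tp_count / (tp_count + fp_count))
        else:
            fp_count += 1
    return sum(prec_at_tp) / max(len(gt_centers), 1)

# Toy scene: two ground-truth centers, three detections (last one spurious).
gt = np.array([[0.0, 0.0], [10.0, 0.0]])
preds = np.array([[0.3, 0.0], [9.5, 0.2], [50.0, 50.0]])
scores = [0.9, 0.8, 0.7]
aps = {d: average_precision(preds, scores, gt, d) for d in (0.5, 1.0, 2.0, 4.0)}
print(aps)  # the second detection only matches once the threshold exceeds ~0.54 m
```

So AP@2.0 being 49% means: with a generous 2-meter matching radius, the precision-recall tradeoff for that class still only integrates to 0.49.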

triasamo1 avatar Mar 07 '21 12:03 triasamo1

@tjucwb I am using https://github.com/traveller59/second.pytorch/blob/master/second/configs/nuscenes/all.pp.mhead.config. I forget the specific numbers for AP@2.0, but the mAP is 29.5. You can refer to our paper https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123550409.pdf.

HenryJunW avatar Mar 10 '21 04:03 HenryJunW