OpenPCDet
Why does using ground truth as prediction not yield 100%?
Hi, testing your evaluation with a random subset of the KITTI ground truth as predictions yields strange results, far from the expected 100% AP. On two random frames I obtain this:
('Car AP@0.70, 0.70, 0.70:\n'
'bbox AP:9.0909, 9.0909, 9.0909\n'
'bev AP:0.0000, 0.0000, 0.0000\n'
'3d AP:0.0000, 0.0000, 0.0000\n'
'aos AP:9.09, 9.09, 9.09\n'
'Car AP_R40@0.70, 0.70, 0.70:\n'
'bbox AP:2.5000, 2.5000, 5.0000\n'
'bev AP:0.0000, 0.0000, 0.0000\n'
'3d AP:0.0000, 0.0000, 0.0000\n'
'aos AP:2.50, 2.50, 5.00\n'
'Car AP@0.70, 0.50, 0.50:\n'
'bbox AP:9.0909, 9.0909, 9.0909\n'
'bev AP:0.0000, 0.0000, 0.0000\n'
'3d AP:0.0000, 0.0000, 0.0000\n'
'aos AP:9.09, 9.09, 9.09\n'
'Car AP_R40@0.70, 0.50, 0.50:\n'
'bbox AP:2.5000, 2.5000, 5.0000\n'
'bev AP:0.0000, 0.0000, 0.0000\n'
'3d AP:0.0000, 0.0000, 0.0000\n'
'aos AP:2.50, 2.50, 5.00\n',
{'Car_3d/easy_R40': 0.0,
'Car_3d/hard_R40': 0.0,
'Car_3d/moderate_R40': 0.0,
'Car_aos/easy_R40': 2.5,
'Car_aos/hard_R40': 5.0,
'Car_aos/moderate_R40': 2.5,
'Car_bev/easy_R40': 0.0,
'Car_bev/hard_R40': 0.0,
'Car_bev/moderate_R40': 0.0,
'Car_image/easy_R40': 2.5,
'Car_image/hard_R40': 5.0,
'Car_image/moderate_R40': 2.5})
I used the evaluation script suggested in the README.md, with two original KITTI label files serving both as ground truth and as predictions. My computation can be quickly reproduced using the file from this commit in my fork, where I included the ground-truth subset I used.
I'd be happy to understand why I don't simply obtain 100% for all detection metrics.
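As an aside, the odd values above look like quantization artifacts of the interpolated AP: 9.0909 ≈ 100/11 and 2.5 = 100/40, which is exactly what the 11-point (respectively 40-point) recall sampling in KITTI-style AP produces when only a single recall threshold receives a nonzero precision. A minimal sketch of the 11-point computation (my own reimplementation for illustration, not OpenPCDet's actual code):

```python
# Minimal KITTI-style interpolated AP (illustrative reimplementation,
# NOT OpenPCDet's code). AP = mean over N sampled recall thresholds of
# the maximum precision achieved at recall >= that threshold.

def interpolated_ap(precisions, recalls, num_points=11):
    # Recall thresholds 0.0, 0.1, ..., 1.0 for the classic 11-point metric.
    thresholds = [i / (num_points - 1) for i in range(num_points)]
    total = 0.0
    for t in thresholds:
        # Best precision at any operating point with recall >= t; 0 if none.
        candidates = [p for p, r in zip(precisions, recalls) if r >= t]
        total += max(candidates) if candidates else 0.0
    return 100.0 * total / num_points

# If matching succeeds everywhere (precision 1 up to recall 1), AP is 100%:
print(interpolated_ap([1.0], [1.0]))            # 100.0
# If only the recall=0 threshold gets nonzero precision, AP collapses to 100/11:
print(round(interpolated_ap([1.0], [0.0]), 4))  # 9.0909
```

So an AP of exactly 100/11 or 100/40 suggests the matching between your predictions and the ground truth is almost entirely failing, rather than the AP formula itself being wrong.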
A 3D detector cannot overfit to a small subset without specific modifications. Just try validating on the whole training set.
I am not using any detector. I just want to see that the evaluation works as expected by using the ground truth as my predictions.
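For anyone reproducing this, one plausible pitfall when feeding ground-truth labels back in as detections is the confidence column: KITTI ground-truth label lines have 15 fields, while detection lines carry a 16th confidence score that the evaluation uses for ranking. A small sketch of the conversion (the helper name and the constant score of 1.0 are my own illustrative choices, not part of OpenPCDet):

```python
# Convert a KITTI ground-truth label line (15 fields) into a detection
# line by appending a confidence score. The helper name and the fixed
# score of 1.0 are illustrative, not part of OpenPCDet.

def gt_line_to_detection(line: str, score: float = 1.0) -> str:
    fields = line.split()
    assert len(fields) == 15, "expected a 15-field KITTI ground-truth line"
    return " ".join(fields + [f"{score:.2f}"])

# Example ground-truth line: type, truncation, occlusion, alpha,
# 2D bbox (4), dimensions (3), location (3), rotation_y.
gt = ("Car 0.00 0 -1.58 587.01 173.33 614.12 200.12 "
      "1.65 1.67 3.64 -0.65 1.71 46.70 -1.55")
print(gt_line_to_detection(gt))  # same line with " 1.00" appended
```

If the score field is missing or malformed, the evaluation's ranking and thresholding can silently break even though the boxes themselves are identical to the ground truth.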
I met the same issue, did you find out what caused it?
Unfortunately not; I switched to another evaluation kit.
Have you tried increasing the amount of data for evaluation, e.g. using 100 frames instead of one? I am trying that.
This issue is stale because it has been open for 30 days with no activity.
This issue was closed because it has been inactive for 14 days since being marked as stale.