
Why does using the ground truth as the prediction not yield 100% AP?

marcelbrucker opened this issue 3 years ago • 5 comments

Hi, testing your evaluation with a random subset of the KITTI ground truth used as the prediction yields strange results, far from the expected 100% AP. On two random frames I obtain this:

('Car AP@0.70, 0.70, 0.70:\n'
 'bbox AP:9.0909, 9.0909, 9.0909\n'
 'bev  AP:0.0000, 0.0000, 0.0000\n'
 '3d   AP:0.0000, 0.0000, 0.0000\n'
 'aos  AP:9.09, 9.09, 9.09\n'
 'Car AP_R40@0.70, 0.70, 0.70:\n'
 'bbox AP:2.5000, 2.5000, 5.0000\n'
 'bev  AP:0.0000, 0.0000, 0.0000\n'
 '3d   AP:0.0000, 0.0000, 0.0000\n'
 'aos  AP:2.50, 2.50, 5.00\n'
 'Car AP@0.70, 0.50, 0.50:\n'
 'bbox AP:9.0909, 9.0909, 9.0909\n'
 'bev  AP:0.0000, 0.0000, 0.0000\n'
 '3d   AP:0.0000, 0.0000, 0.0000\n'
 'aos  AP:9.09, 9.09, 9.09\n'
 'Car AP_R40@0.70, 0.50, 0.50:\n'
 'bbox AP:2.5000, 2.5000, 5.0000\n'
 'bev  AP:0.0000, 0.0000, 0.0000\n'
 '3d   AP:0.0000, 0.0000, 0.0000\n'
 'aos  AP:2.50, 2.50, 5.00\n',
 {'Car_3d/easy_R40': 0.0,
  'Car_3d/hard_R40': 0.0,
  'Car_3d/moderate_R40': 0.0,
  'Car_aos/easy_R40': 2.5,
  'Car_aos/hard_R40': 5.0,
  'Car_aos/moderate_R40': 2.5,
  'Car_bev/easy_R40': 0.0,
  'Car_bev/hard_R40': 0.0,
  'Car_bev/moderate_R40': 0.0,
  'Car_image/easy_R40': 2.5,
  'Car_image/hard_R40': 5.0,
  'Car_image/moderate_R40': 2.5})

I use the Python script suggested in the README.md, with two original KITTI label files serving both as ground truth and as prediction. My computation can be quickly reproduced using the file from this commit in my fork, where I included the ground-truth subset used.

I'd be happy to understand why I don't simply obtain 100% for all detection metrics.

marcelbrucker avatar Aug 16 '22 21:08 marcelbrucker
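
For reference, the suspicious numbers above are consistent with how KITTI-style interpolated AP is averaged: 9.0909 ≈ 100/11 and 2.5 = 100/40, i.e. only a single recall sample out of 11 (respectively 40) appears to contribute. A minimal sketch of the arithmetic (simplified for illustration, not the actual OpenPCDet implementation):

```python
# Simplified N-point interpolated AP in the spirit of the KITTI metric.
# This is an illustration of the averaging only, NOT OpenPCDet's code.

def interpolated_ap(precisions_at_recall, num_points=11):
    """Average precision sampled at `num_points` evenly spaced recall levels.

    `precisions_at_recall` maps a sampled recall level (0.0, 0.1, ...) to
    the interpolated precision at that level; missing levels count as 0.0.
    """
    recall_levels = [round(i / (num_points - 1), 2) for i in range(num_points)]
    return 100.0 * sum(precisions_at_recall.get(r, 0.0)
                       for r in recall_levels) / num_points

# Precision 1.0 at every sampled recall level gives the expected 100%:
perfect = {round(i / 10, 2): 1.0 for i in range(11)}
print(interpolated_ap(perfect))   # 100.0

# If only a single recall sample has precision 1.0 (plausible when only
# one or two frames with a handful of objects are evaluated), the
# 11-point average collapses to 100/11:
single = {0.0: 1.0}
print(interpolated_ap(single))    # 9.0909...
```

The same averaging over 40 recall points would give 100/40 = 2.5 for one contributing sample, matching the `_R40` values in the printed dict.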

A 3D detector cannot overfit to a small subset without specific modifications. Just try validating on the whole training set.

jihanyang avatar Aug 17 '22 01:08 jihanyang

A 3D detector cannot overfit to a small subset without specific modifications. Just try validating on the whole training set.

I am not using any detector. I just want to verify that the evaluation works as expected by using the ground truth as my predictions.

marcelbrucker avatar Aug 17 '22 02:08 marcelbrucker
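
For what it's worth, the box matching itself cannot be the problem for the 2D bbox metric: a box compared against itself has IoU 1.0 and clears any overlap threshold (0.5 or 0.7). A quick self-contained check with plain axis-aligned 2D IoU (for illustration only; OpenPCDet's BEV/3D overlap uses rotated boxes):

```python
# Axis-aligned 2D IoU sanity check: a box compared against itself must
# have IoU 1.0 and therefore pass any overlap threshold (0.5 or 0.7).
# Plain 2D IoU for illustration, not OpenPCDet's rotated-box overlap.

def iou_2d(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

box = (712.4, 143.0, 810.7, 307.9)   # a KITTI-style 2D bbox in pixels
assert iou_2d(box, box) == 1.0       # GT vs. itself always matches
```

So any score below 100% must come from a later stage of the evaluation, e.g. how precision is averaged over recall samples or how difficulty levels filter the boxes.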

I ran into the same issue. Did you find out what caused it?

yimingshan666 avatar Aug 29 '22 22:08 yimingshan666

I ran into the same issue. Did you find out what caused it?

Unfortunately not; I switched to another evaluation kit.

marcelbrucker avatar Aug 30 '22 12:08 marcelbrucker

Have you tried increasing the amount of data for evaluation, e.g. using 100 frames instead of 1? I am trying that now.

yimingshan666 avatar Aug 30 '22 12:08 yimingshan666

This issue is stale because it has been open for 30 days with no activity.

github-actions[bot] avatar Sep 30 '22 02:09 github-actions[bot]

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions[bot] avatar Oct 15 '22 02:10 github-actions[bot]