kitti-object-eval-python
I don't understand the meaning of overlap_mod and overlap_easy in get_official_eval_result.
In the official benchmark, the overlap for cars is supposed to be 70%. Which line in the code corresponds to the official benchmark as given here?
@brunorigal Did you figure it out yet? I don't understand them either. There are also two groups of results for each class, e.g.:

```
Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:100.00, 100.00, 100.00
bev  AP:2.69, 1.97, 2.40
3d   AP:1.82, 1.23, 1.46
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:100.00, 100.00, 100.00
bev  AP:12.77, 12.40, 11.82
3d   AP:11.19, 8.34, 10.00
```
Why are there two groups of evaluation results and three thresholds for each?
If you look at the code, min_overlaps is built by stacking two arrays (named overlap_mod and overlap_easy in this version), so the two groups of results correspond to a strict and a relaxed overlap setting. Within each group there are three IoU thresholds per class, one for each metric: the three numbers after the @ are the minimum overlaps for bbox, bev, and 3d respectively (0.70, 0.50, 0.50 in the second group). The three AP values in each row are then the easy, moderate, and hard difficulty levels. I think it is set up this way so you can modify the thresholds easily to your requirements... but I agree it is super ambiguous.
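To make that concrete, here is a minimal sketch of the threshold layout, with the values taken from eval.py in this repo (a sketch only; some forks name these arrays overlap_0_7 and overlap_0_5 instead of overlap_mod and overlap_easy):

```python
import numpy as np

# Threshold layout as in eval.py (names may differ across forks).
# Columns = classes: Car, Pedestrian, Cyclist, Van, Person_sitting.
# Rows    = metrics: bbox, bev, 3d.
overlap_mod = np.array([[0.7, 0.5, 0.5, 0.7, 0.5],      # bbox
                        [0.7, 0.5, 0.5, 0.7, 0.5],      # bev
                        [0.7, 0.5, 0.5, 0.7, 0.5]])     # 3d
overlap_easy = np.array([[0.7, 0.5, 0.5, 0.7, 0.5],     # bbox (unchanged)
                         [0.5, 0.25, 0.25, 0.5, 0.25],  # bev (relaxed)
                         [0.5, 0.25, 0.25, 0.5, 0.25]]) # 3d (relaxed)

# Stacked to shape [2, 3, num_classes]: axis 0 picks the group,
# axis 1 the metric, axis 2 the class.
min_overlaps = np.stack([overlap_mod, overlap_easy], axis=0)

# The result header "Car AP@0.70, 0.70, 0.70:" prints min_overlaps[i, :, car],
# i.e. the bbox/bev/3d thresholds of group i for the Car column.
car = 0
for i, group in enumerate(["mod", "easy"]):
    print(group, "->", min_overlaps[i, :, car])
# mod  -> [0.7 0.7 0.7]
# easy -> [0.7 0.5 0.5]
```

So for cars the first group uses 0.7 for all three metrics, which should be the line matching the official 70% requirement @brunorigal asked about; the second group is just an extra, relaxed setting.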
@traveller59 kindly provide some clarification on these metrics.