second.pytorch

Evaluations

Open · repa1022 opened this issue · 11 comments

Hi

Can someone help me understand the different evaluations? There is one "AP@0.70, 0.70, 0.70", one "AP@0.70, 0.50, 0.50", and also a "Car coco..." block. Which of the entries (bbox, bev, 3d, aos) describes the detection precision? I cannot find anything about this, and the values on the KITTI benchmark website are different.

Thank you!

repa1022 · Jan 28 '19 09:01

AP@0.70, 0.70, 0.70 means evaluating car performance on easy, moderate and hard, using 0.7 (easy), 0.7 (mod) and 0.7 (hard) as the overlap thresholds. bbox means the overlap (intersection over union) is computed between 2D image bounding boxes, bev means bird's-eye-view overlap, and 3d means 3D overlap. The aos metric (average orientation similarity) you can ignore.
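To make "overlap" a bit more concrete, here is a minimal axis-aligned IoU sketch (only an illustration; the actual eval code computes rotated-box overlaps for bev and 3d):

```python
def iou_2d(a, b):
    """Axis-aligned 2D IoU between boxes given as (x1, y1, x2, y2).

    The KITTI "bbox" metric uses this kind of overlap on image boxes;
    "bev" and "3d" use rotated-box overlaps in the ground plane and in 3D,
    so this axis-aligned version only illustrates the idea.
    """
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou_2d((0, 0, 2, 2), (1, 0, 3, 2)))  # 2 / 6 = 0.333...
```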

traveller59 · Jan 29 '19 15:01

OK, thank you! So AP@0.70, 0.50, 0.50 means 0.70 (easy), 0.50 (moderate) and 0.50 (hard) as the IoU thresholds? I am a little confused, because the mAPs of Car at 0.70 (easy) in the two evaluation types (AP@0.70,0.50,0.50 and AP@0.70,0.70,0.70) aren't the same.

repa1022 · Jan 29 '19 17:01

Hello @traveller59 and @repa1022, I have the same problem. The evaluation results under AP@0.7,0.7,0.7 and AP@0.7,0.5,0.5 are totally different. I understand that 0.5 differs from 0.7, but why are the results different even for 0.7 (easy)? Many thanks.

pangsu0613 · Feb 06 '19 20:02

Hello @traveller59 and @repa1022, I have the same problem. The evaluation results under AP@0.7,0.7,0.7 and AP@0.7,0.5,0.5 are totally different. I understand that 0.5 differs from 0.7, but why are the results different even for 0.7 (easy)? Many thanks.

Have you figured out why they are different? I have the same doubt; can you explain how to understand this? Thank you.

tyjiang1997 · Oct 07 '19 08:10

Hi @traveller59 and @repa1022, I know it's an old issue, but I was using this today and got some results, and I'm not sure how to interpret them.

My first question is: what exactly do you mean by easy, moderate and hard? Is this supposed to describe the size of the object, or some kind of occlusion (or lack thereof), and how exactly is it defined?

I also got two sets of results, namely the official evaluation and the coco evaluation. What is the difference between them?

mnik17 · Sep 28 '21 21:09

Hello @mnik17, easy, moderate and hard are the difficulty levels defined by the KITTI object detection benchmark; you can find their definitions on the KITTI website and in the devkit readme file. Basically, they depend on the object's bounding-box height in the image plane, its occlusion level, and its truncation level. The official evaluation results come from the KITTI evaluation script; COCO is another object detection benchmark whose evaluation metrics differ from KITTI's. In my opinion, if you work mainly on KITTI, you can ignore the COCO results.
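For reference, the commonly cited difficulty criteria from the devkit readme (worth double-checking against your devkit version) can be sketched like this:

```python
# Commonly cited KITTI difficulty criteria: minimum 2D box height in pixels,
# maximum occlusion level (0 = fully visible, 1 = partly occluded,
# 2 = largely occluded), and maximum truncation. Double-check these against
# the devkit readme shipped with your KITTI download.
DIFFICULTY = {
    "easy":     {"min_height": 40, "max_occlusion": 0, "max_truncation": 0.15},
    "moderate": {"min_height": 25, "max_occlusion": 1, "max_truncation": 0.30},
    "hard":     {"min_height": 25, "max_occlusion": 2, "max_truncation": 0.50},
}

def difficulty_of(box_height_px, occlusion, truncation):
    """Return the easiest level a ground-truth object still qualifies for,
    or None if it is ignored even at the hard level."""
    for level in ("easy", "moderate", "hard"):
        c = DIFFICULTY[level]
        if (box_height_px >= c["min_height"]
                and occlusion <= c["max_occlusion"]
                and truncation <= c["max_truncation"]):
            return level
    return None

print(difficulty_of(box_height_px=45, occlusion=0, truncation=0.10))  # easy
print(difficulty_of(box_height_px=30, occlusion=1, truncation=0.20))  # moderate
print(difficulty_of(box_height_px=20, occlusion=2, truncation=0.60))  # None (ignored)
```

As for the coco-style block, roughly speaking it averages AP over a range of IoU thresholds instead of evaluating at a single one, so its numbers are not directly comparable to the official KITTI numbers.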

pangsu0613 · Sep 29 '21 03:09

Hello @pangsu0613, thank you for the fast reply. So basically, if there is a 0.7 threshold for an easy object, the evaluation will not consider any detections with less overlap than that for those objects?

mnik17 · Sep 29 '21 08:09

@mnik17, 0.7 is the IoU threshold for the car class in the KITTI dataset (0.5 is used for pedestrians and cyclists). If the IoU between a ground truth and a predicted box is at least 0.7, the prediction is counted as a true positive; a prediction that does not reach 0.7 overlap with any ground truth is counted as a false positive.
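A very rough sketch of that matching step (the real eval additionally sorts detections by confidence, sweeps score thresholds to build a precision/recall curve, and handles don't-care regions, so treat this as a toy version):

```python
def match_detections(gt_boxes, det_boxes, iou_fn, iou_threshold=0.7):
    """Greedy toy matcher: each ground truth can absorb at most one detection;
    a detection that reaches the IoU threshold with an unmatched ground truth
    is a true positive, otherwise it is a false positive. Leftover ground
    truths count as misses (false negatives)."""
    matched_gt = set()
    tp = fp = 0
    for det in det_boxes:
        best_iou, best_gt = 0.0, None
        for gi, gt in enumerate(gt_boxes):
            if gi in matched_gt:
                continue
            overlap = iou_fn(det, gt)
            if overlap > best_iou:
                best_iou, best_gt = overlap, gi
        if best_gt is not None and best_iou >= iou_threshold:
            matched_gt.add(best_gt)
            tp += 1
        else:
            fp += 1
    fn = len(gt_boxes) - len(matched_gt)
    return tp, fp, fn
```

Any overlap function can be plugged in as iou_fn, e.g. the axis-aligned iou_2d sketch earlier in this thread, or a rotated-box bev/3d overlap.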

pangsu0613 · Sep 29 '21 15:09

Ok, thank you very much @pangsu0613

mnik17 · Sep 29 '21 22:09

Hello @pangsu0613, I have the same problem. The evaluation results under AP@0.7,0.7,0.7 and AP@0.7,0.5,0.5 are totally different. I understand that 0.5 differs from 0.7, but why are the results different even for 0.7 (easy)? Many thanks. Have you figured it out by now? Can you help me?

wuyuyu324 · Dec 16 '21 02:12

Hi @wuyuyu324, my understanding is that the first 0.7 in AP@0.7,0.5,0.5 is a typo and it should really be all 0.5. Basically, AP@0.7,0.7,0.7 shows the easy, moderate and hard results with the IoU threshold at 0.7, and AP@0.7,0.5,0.5 shows the easy, moderate and hard results with the IoU threshold at 0.5. Because 0.7 is larger than 0.5, it is a stricter (harsher) criterion, so the numbers under 0.7 are smaller than the numbers under 0.5 (of course, the comparison must be made at the same difficulty level: 0.7 easy vs 0.5 easy, 0.7 moderate vs 0.5 moderate, etc.).
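For completeness, the numbers themselves are interpolated average precision values; a minimal 11-point sketch (KITTI originally used 11 recall points, newer devkits sample 40 recall positions, but the idea is the same) also shows why a stricter IoU threshold can only lower the result:

```python
def interpolated_ap_11pt(recalls, precisions):
    """11-point interpolated AP over one precision/recall curve.

    The curve is built at a fixed IoU threshold, so a stricter threshold
    (0.7 instead of 0.5) lowers precision at every recall level and
    therefore lowers the AP."""
    ap = 0.0
    for r in [i / 10.0 for i in range(11)]:  # r = 0.0, 0.1, ..., 1.0
        # interpolated precision = best precision achieved at recall >= r
        p_interp = max(
            [prec for rec, prec in zip(recalls, precisions) if rec >= r],
            default=0.0)
        ap += p_interp / 11.0
    return ap

# Tiny made-up curve, just to show the call; real curves come from the eval script.
print(interpolated_ap_11pt([0.2, 0.5, 0.8], [1.0, 0.8, 0.6]))
```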

pangsu0613 · Dec 16 '21 15:12
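For anyone who still finds the two blocks confusing: based on one reading of the second.pytorch KITTI eval script (an assumption on my part, please verify against your local second/utils/eval.py), the three numbers printed after AP@ are the minimum overlaps applied to the bbox, bev and 3d metrics respectively, not per-difficulty thresholds. That would also explain it if your bbox (and aos) rows look identical across the two Car blocks while only the bev and 3d rows change. A minimal sketch of that reading:

```python
# Hypothetical layout of the Car thresholds behind the two printed headers
# (verify against your local second/utils/eval.py before relying on it):
# each row is one evaluation setting, and the columns are the minimum
# overlaps for the bbox, bev and 3d metrics respectively.
CAR_MIN_OVERLAPS = [
    (0.70, 0.70, 0.70),  # printed as "Car AP@0.70, 0.70, 0.70"
    (0.70, 0.50, 0.50),  # printed as "Car AP@0.70, 0.50, 0.50"
]

for bbox_thr, bev_thr, threed_thr in CAR_MIN_OVERLAPS:
    print("Car AP@{:.2f}, {:.2f}, {:.2f}:".format(bbox_thr, bev_thr, threed_thr))
    # Under this reading, bbox is evaluated at IoU 0.70 in both settings
    # (hence identical bbox rows), while the bev/3d thresholds drop from
    # 0.70 to 0.50, which is why only those rows change between the blocks.
```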