faster_coco_eval

add equalScore param

Open · MiXaiLL76 opened this pull request 1 year ago · 0 comments

Motivation

mAP is computed incorrectly when all detection scores are equal (i.e., when no meaningful score is provided): AP depends on the score-sorted order of detections, and ties are broken arbitrarily. This PR adds an `equalScore` parameter to fix the evaluation when scores are tied.
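
The effect is easy to reproduce outside the library. Below is a minimal standalone sketch (illustrative Python, not faster_coco_eval code; the helper `average_precision` is hypothetical): the same two tied detections yield a different AP depending on which way the sort happens to break the tie.

```python
# Minimal illustration of why tied scores make AP ambiguous: with equal
# scores, the TP/FP ordering after sorting is arbitrary, yet AP differs
# per ordering. The helper below is illustrative, not library code.

def average_precision(flags, num_gt):
    """All-point-interpolated AP for an ordered list of TP(True)/FP(False) flags."""
    tp = fp = 0
    precisions, recalls = [], []
    for is_tp in flags:
        tp += is_tp
        fp += not is_tp
        precisions.append(tp / (tp + fp))
        recalls.append(tp / num_gt)
    # Make precision monotonically non-increasing (right-to-left running max).
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap

# One GT box, two detections with identical scores: one TP, one FP.
# Either ordering is "valid" after sorting by score, but AP disagrees.
print(average_precision([True, False], num_gt=1))  # 1.0
print(average_precision([False, True], num_gt=1))  # 0.5
```

The `equalScore` parameter is meant to address exactly this ambiguity.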

Usage

When all scores are equal, enable the new parameter with `eval.params.equalScore = True` before evaluating:

# Import path assumed from the faster_coco_eval README:
from faster_coco_eval import COCO, COCOeval_faster as COCOeval

coco = COCO("gt.json")  # ground-truth annotations

# First prediction set
pred = coco.loadRes("pred1.json")
eval = COCOeval(coco, pred, 'bbox')
eval.params.equalScore = True  # the parameter added by this PR
eval.evaluate()
eval.accumulate()
eval.summarize()

# Second prediction set, same settings
pred = coco.loadRes("pred2.json")
eval = COCOeval(coco, pred, 'bbox')
eval.params.equalScore = True
eval.evaluate()
eval.accumulate()
eval.summarize()

Output (both runs produce identical metrics):
Evaluate annotation type *bbox*
COCOeval_opt.evaluate() finished...
DONE (t=0.00s).
Accumulating evaluation results...
COCOeval_opt.accumulate() finished...
DONE (t=0.00s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.663
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.663
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.663
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.663
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.333
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.667
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.667
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.667
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.667
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.667
Loading and preparing results...
DONE (t=0.00s)
Evaluate annotation type *bbox*
COCOeval_opt.evaluate() finished...
DONE (t=0.00s).
Accumulating evaluation results...
COCOeval_opt.accumulate() finished...
DONE (t=0.00s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.663
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.663
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.663
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.663
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.333
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.667
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.667
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.667
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.667
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.667
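
(The -1.000 rows simply mean the ground truth contains no medium or large boxes, so those area ranges are not evaluated.)

For the motivating scenario, predictions that carry no confidence field at all, here is a hedged sketch of one way to drive the new parameter. The file name `pred_no_scores.json` and the dummy score are assumptions, and the import path follows the faster_coco_eval README:

```python
# Illustrative sketch, not part of this PR. Predictions lacking a "score"
# field get one identical dummy value; equalScore then removes the
# ordering bias. File name and dummy value are assumptions.
import json

from faster_coco_eval import COCO, COCOeval_faster as COCOeval  # path assumed

coco = COCO("gt.json")
with open("pred_no_scores.json") as f:
    dets = json.load(f)
for det in dets:
    det.setdefault("score", 1.0)  # same dummy score for every detection

pred = coco.loadRes(dets)  # loadRes accepts an in-memory list as well as a path
eval = COCOeval(coco, pred, "bbox")
eval.params.equalScore = True  # tie-aware evaluation added by this PR
eval.evaluate()
eval.accumulate()
eval.summarize()
```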

MiXaiLL76 · Oct 31 '24