
AP value in multiclass object detection.

Open beratersari opened this issue 1 year ago • 0 comments

import numpy as np
from mean_average_precision import MetricBuilder
import warnings
warnings.filterwarnings("ignore")

# [xmin, ymin, xmax, ymax, class_id, difficult, crowd]
gt = np.array([
    [439, 157, 556, 241, 0, 0, 0]
])

# [xmin, ymin, xmax, ymax, class_id, confidence]
preds = np.array([
    [439, 157, 556, 241, 0, 0.460851]
])

# print list of available metrics
print(MetricBuilder.get_metrics_list())

# create metric_fn
metric_fn = MetricBuilder.build_evaluation_metric("map_2d", async_mode=False, num_classes=4)

for i in range(10):
    metric_fn.add(preds, gt)
print(metric_fn.value(iou_thresholds=0.5))
print(f"VOC PASCAL mAP: {metric_fn.value(iou_thresholds=0.5, recall_thresholds=np.arange(0., 1.1, 0.1))['mAP']}")
print(f"VOC PASCAL mAP in all points: {metric_fn.value(iou_thresholds=0.5)['mAP']}")
print(f"COCO mAP: {metric_fn.value(iou_thresholds=np.arange(0.5, 1.0, 0.05), recall_thresholds=np.arange(0., 1.01, 0.01), mpolicy='soft')['mAP']}")

I got a 0.25 mAP value from that code. The reason is that it assigns a zero AP to classes 1, 2, and 3 and an AP of 1.0 to class 0, and the mean of those four values is 0.25. Is it sensible to assign a 0 AP to classes that do not exist in the ground-truth array? Could you help me?
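Whether absent classes should count toward the mean is a design choice; COCO-style evaluators typically exclude classes with no ground-truth instances from the average. A minimal sketch of that alternative, using the per-class AP values reported above as assumed inputs (this does not use the library's actual return structure):

```python
import numpy as np

# Per-class APs as reported for this example (assumed values):
# class 0 matched its only ground-truth box; classes 1-3 have no GT boxes.
per_class_ap = {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0}

# Classes that actually appear in the ground truth (class_id is column 4)
gt = np.array([[439, 157, 556, 241, 0, 0, 0]])
present_classes = set(gt[:, 4].astype(int))

# mAP averaged over all num_classes vs. only over classes with ground truth
map_all = np.mean(list(per_class_ap.values()))
map_present = np.mean([ap for c, ap in per_class_ap.items()
                       if c in present_classes])

print(map_all)      # 0.25
print(map_present)  # 1.0
```

Averaging only over `present_classes` reproduces the 1.0 you would expect when every prediction for the single annotated class is correct.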

beratersari · May 01 '23 07:05