
Average Precision in evaluation script?

Open · sravya8 opened this issue 7 years ago • 1 comment

It seems COCO-Text ICDAR17 uses VOC-style AP as its evaluation metric, so I'm curious why it isn't supported in the evaluation API?

sravya8 · Oct 06 '17 12:10

I see there is an offline evaluation script provided on the competition website, on the "My Methods" page. Here is the snippet for the AP calculation; the comments are mine:

for n in range(len(confList)):  # Num predictions
    match = matchList[n]
    if match:
        correct += 1
        AP += float(correct) / (n + 1)  # rel(n) missing?
if numGtCare > 0:
    AP /= numGtCare
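
For reference, here is the textbook formulation I have in mind, with the rel(n) term written out explicitly. This is a minimal sketch reusing the names from the snippet above (`matchList`, `numGtCare`) and assuming predictions are already sorted by descending confidence; it is my reading of the standard formula, not the competition's code:

```python
# Sketch of standard (non-interpolated) AP with an explicit rel(n) term.
# matchList[n] is True if the n-th ranked prediction matched a GT box.
def average_precision(matchList, numGtCare):
    correct = 0
    AP = 0.0
    for n, match in enumerate(matchList):
        rel_n = 1 if match else 0        # rel(n): 1 at matched ranks, else 0
        correct += rel_n
        AP += (float(correct) / (n + 1)) * rel_n  # precision@n * rel(n)
    return AP / numGtCare if numGtCare > 0 else 0.0
```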

Is there a rel(n) term missing? Also, from the competition page it seems the evaluation is based on VOC-style AP. In that case, shouldn't the script use interpolated precision over recall intervals?
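
Concretely, this is the kind of interpolated computation I would have expected: a rough sketch of 11-point interpolated AP as in PASCAL VOC 2007, built on the same `matchList`/`numGtCare` names. This is just my assumption of what "VOC style" means here, not the competition code:

```python
# Rough sketch of VOC-style 11-point interpolated AP (my understanding of
# PASCAL VOC 2007). Predictions assumed sorted by descending confidence.
def voc_11point_ap(matchList, numGtCare):
    correct = 0
    precisions, recalls = [], []
    for n, match in enumerate(matchList):
        if match:
            correct += 1
        precisions.append(float(correct) / (n + 1))  # cumulative precision@n
        recalls.append(float(correct) / numGtCare)   # cumulative recall@n
    AP = 0.0
    for t in [i / 10.0 for i in range(11)]:  # recall thresholds 0.0 .. 1.0
        # interpolated precision: max precision at any recall >= t
        p = max((p for p, r in zip(precisions, recalls) if r >= t), default=0.0)
        AP += p / 11.0
    return AP
```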

sravya8 · Oct 06 '17 13:10