Average Precision in evaluation script?
It seems like COCO-Text - ICDAR17 uses VOC-style AP as its evaluation metric, so I'm curious why it is not supported in the evaluation API?
I see there is an offline evaluation script provided on the competition website on the "My methods" page. Here is the snippet for the AP calculation; the comments are mine:
for n in range(len(confList)):  # Num predictions
    match = matchList[n]
    if match:
        correct += 1
        AP += float(correct) / (n + 1)  # rel(n) missing?

if numGtCare > 0:
    AP /= numGtCare
Is there a rel(n) term missing? (The information-retrieval definition I have in mind is AP = (1/|GT|) * sum_n P(n) * rel(n), where rel(n) is 1 only when the n-th ranked detection matches a ground truth.) Also, from the competition page it seems like the evaluation is based on VOC-style AP. In that case, shouldn't the script use interpolated precision over intervals of confidence?
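
To illustrate what I mean by interpolated precision, here is a rough sketch of a VOC-style (all-point interpolated) AP computation. This is entirely my own code, not the official script: the function name voc_style_ap is made up, numpy is used for convenience, and conf_list / match_list / num_gt_care simply mirror the variables in the snippet above.

import numpy as np

def voc_style_ap(conf_list, match_list, num_gt_care):
    if num_gt_care <= 0:
        return 0.0

    # Rank detections by descending confidence (assuming they are not
    # already sorted, which may differ from the official script).
    order = np.argsort(-np.asarray(conf_list, dtype=float))
    matches = np.asarray(match_list, dtype=bool)[order]

    # Cumulative true/false positives at each rank.
    tp = np.cumsum(matches)
    fp = np.cumsum(~matches)
    recall = tp / num_gt_care
    precision = tp / (tp + fp)

    # All-point interpolation: make precision monotonically
    # non-increasing from right to left, then integrate over recall.
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))

If the VOC-style AP the competition refers to is the 11-point variant instead, only the interpolation step above would change; the precision/recall construction stays the same.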