metric-learning-divide-and-conquer
Is the algorithm for calculating the recall@k metric correct?
https://github.com/CompVis/metric-learning-divide-and-conquer/blob/1766c2cffe1075692657898d2086af4bc9d92929/lib/evaluation/recall.py#L9
Hi, is the code above, which calculates the recall@k metric, correct? It looks like top-k accuracy to me: we add 1 to the result sum if we find at least one image in the retrieval set that is from the same class as the query image. Recall@k, on the other hand, is (# of recommended items @k that are relevant) / (total # of relevant items), as described in this article: https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54
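For concreteness, here is a small pure-Python sketch of the two quantities being compared (my own toy example; the helper names `topk_hit` and `recall_at_k` are hypothetical, not from the repo):

```python
def topk_hit(query_label, retrieved_labels, k):
    # 1 if at least one of the top-k retrieved items shares the query's class
    # (this is what a "hit at k" / top-k accuracy style metric counts)
    return int(any(lab == query_label for lab in retrieved_labels[:k]))

def recall_at_k(query_label, retrieved_labels, k, total_relevant):
    # (# of relevant items in the top k) / (total # of relevant items),
    # i.e. the recommender-systems definition of recall@k
    hits = sum(1 for lab in retrieved_labels[:k] if lab == query_label)
    return hits / total_relevant

query = 0
retrieved = [1, 0, 0, 2, 0]  # class labels of the top-5 neighbours
print(topk_hit(query, retrieved, 2))        # → 1   (a match appears in the top 2)
print(recall_at_k(query, retrieved, 2, 5))  # → 0.2 (1 of 5 relevant items found)
```

With the same retrieval list, the two numbers clearly differ, which is the distinction the question is pointing at.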
Thanks, Jason
Hi,
I am also wondering whether the recall calculation is incorrect, as it only checks for at least one correct retrieval. A way to calculate recall can be found at the link below: https://github.com/littleredxh/DREML/blob/master/_code/Utils.py
Please see this part of the code:
```python
for r in rank:
    A = 0
    for i in range(r):
        imgPre = imgLab[idx[:, i]]
        A += (imgPre == imgLab).float()
    acc_list.append((torch.sum((A > 0).float()) / N).item())
```
So we should compare the predicted labels (imgPre) against the true labels (imgLab) for the retrieved images and divide by the total number of images (N) to calculate recall.
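For reference, here is a numpy sketch of what the quoted torch snippet computes (variable names kept; the toy labels and precomputed neighbour-index matrix `idx` are my own, with `idx[:, i]` assumed to hold each query's i-th nearest neighbour, self excluded). Note that, as written, it counts the fraction of queries with at least one correct retrieval among the top r, via the `A > 0` mask:

```python
import numpy as np

def recall_at_ranks(imgLab, idx, ranks):
    imgLab = np.asarray(imgLab)
    N = len(imgLab)
    acc_list = []
    for r in ranks:
        A = np.zeros(N)
        for i in range(r):
            imgPre = imgLab[idx[:, i]]              # neighbour labels at rank i
            A += (imgPre == imgLab).astype(float)   # per-query match count
        # fraction of queries with >= 1 correct retrieval in the top r
        acc_list.append(float(np.sum(A > 0) / N))
    return acc_list

labels = np.array([0, 0, 1, 1])
# toy precomputed neighbour indices: row = query, column = rank
idx = np.array([[2, 1],
                [0, 3],
                [3, 0],
                [2, 1]])
print(recall_at_ranks(labels, idx, [1, 2]))  # → [0.75, 1.0]
```

In the toy run, query 0's nearest neighbour is the wrong class but its second neighbour matches, so the score rises from 0.75 at r=1 to 1.0 at r=2.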