open-metric-learning
Metric value may change slightly at random
We observed that the metric value might change slightly depending on the list of k values
for which we want to calculate our metric. For example, if we calculate CMC for k_vals=(1, 10)
and for k_vals=(1, 20), the resulting CMC@1 may differ slightly between the two runs.
Most likely the instability comes from torch.topk
and from the way we calculate the metric:
https://github.com/OML-Team/open-metric-learning/blob/c6004e4d2f43de43ca5c480cc69dbbef06599e69/oml/functional/metrics.py#L89
Let's consider an example and calculate precision@2:
is_gt = [1, 1, 0, 1]
dist = [1, 2, 2, 3]
Two elements have the same distance to the query, so if top_k
picks the first of them, then precision@2 = 1; otherwise precision@2 = 1/2.
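A minimal sketch of the ambiguity (the variable names are illustrative, not OML's internals; this assumes a single query):

```python
import torch

# Ground-truth flags and distances from the example above: the items at
# indices 1 and 2 are equidistant from the query.
is_gt = torch.tensor([1, 1, 0, 1])
dist = torch.tensor([1.0, 2.0, 2.0, 3.0])

# torch.topk does not guarantee which of the tied elements it returns,
# so either index 1 (a ground-truth item) or index 2 (not a ground-truth
# item) may end up in the top-2.
_, top2_ids = torch.topk(dist, k=2, largest=False)
precision_at_2 = is_gt[top2_ids].float().mean().item()

# Depending on the tie-break, precision@2 is either 1.0 or 0.5.
print(precision_at_2)
```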
@dapladoc in case you are interested ^
One possible solution is to allow elements with the same distance to the query to share the k-th
place. This approach is used in Hyp-ViT: https://github.com/htdt/hyp_metric/blob/be2b829b21c279ab874f113c648c0296be89134d/helpers.py#L70
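A tie-aware version could be sketched like this (an illustration of the idea, not OML's actual implementation; the hyp_metric code linked above applies a similar trick to recall):

```python
import torch


def precision_at_k_tie_aware(dist: torch.Tensor, is_gt: torch.Tensor, k: int) -> float:
    # The distance of the k-th closest element defines the cutoff; every
    # element at or below it shares the top-k places.
    kth_dist = torch.kthvalue(dist, k).values
    in_top = dist <= kth_dist
    # Normalizing by the number of elements inside the cutoff (not by k)
    # makes the value independent of how torch.topk would break the tie.
    return is_gt[in_top].float().mean().item()


dist = torch.tensor([1.0, 2.0, 2.0, 3.0])
is_gt = torch.tensor([1, 1, 0, 1])
print(precision_at_k_tie_aware(dist, is_gt, k=2))  # 2/3, regardless of tie order
```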
Hi, I'm interested in this issue.
Closed because of: https://github.com/OML-Team/open-metric-learning/pull/382#issuecomment-2155060075