deep_metric_learning
In `def angular_mc_loss(f, f_p, alpha=45, in_degree=True):`, shouldn't
`term1 = 4 * sq_tan_alpha + matmul(f + f_p, transpose(f_p))`
actually be
`term1 = 4 * sq_tan_alpha * matmul(f + f_p, transpose(f_p))` ?
The paper's formula is: f_{a,p,n} = 4 tan^2(α) (x_a + x_p)^T x_n − 2 (1 + tan^2(α)) x_a^T x_p
Thanks for reporting the issue. It seems to be a bug; I'll check it.
Thank you. With this fix, the accuracy has increased, and the sensitivity to the hyperparameter alpha now matches what the paper describes (30 < alpha < 50 works well in my experiments).
In "main_angular_loss.py" there is `alpha = UniformDistribution(low=4, high=15)`. Does that mean 4 < alpha < 15? Should I set 30 < alpha < 50 instead? Thank you very much.
Ah yes, you should set 30 < alpha < 50:
`alpha = UniformDistribution(low=4, high=15)` should be fixed to `alpha = UniformDistribution(low=30, high=50)`.
When I use a different dataset, such as a person dataset, the optimal alpha range is not the same. Have you tried different datasets? Thank you very much.
No, currently I have tested only on Cars196 (referred to as "Stanford Car" in the Angular loss paper). The paper says in section 4.5:
We found that our method performs consistently well in all three datasets for 36° ≤ α ≤ 55°.
Thank you very much. I will try again.
Can you provide pseudocode for the angular loss so that I can try implementing it in TensorFlow? It might be useful to the wider community. I am not familiar with Chainer, so I couldn't interpret the exact meaning of `term2nd` and the later part of the implementation. Thanks in advance.
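Not the repo's actual code, but here is a framework-agnostic NumPy sketch of the angular (mc) loss under the usual N-pairs batch assumption from the paper: row i of `f` is an anchor, row i of `f_p` its positive, and the positives of the other pairs act as negatives. The function name and arguments mirror `angular_mc_loss(f, f_p, alpha)` from this thread, but everything inside is an independent reimplementation, so treat it as illustrative pseudocode rather than the Chainer implementation.

```python
import numpy as np

def angular_mc_loss(f, f_p, alpha_deg=45.0):
    """Angular (mc) loss sketch for an N-pairs batch.

    f, f_p: (N, D) arrays of L2-normalized anchor / positive embeddings.
    For pair i, the negatives are the positives f_p[j] with j != i.
    """
    N = f.shape[0]
    sq_tan_alpha = np.tan(np.deg2rad(alpha_deg)) ** 2
    # term1[i, j] = 4 tan^2(alpha) * (x_a_i + x_p_i)^T x_p_j
    # (note the multiplication here -- the bug in this thread was a '+')
    term1 = 4 * sq_tan_alpha * (f + f_p) @ f_p.T
    # term2[i] = 2 (1 + tan^2(alpha)) * x_a_i^T x_p_i
    term2 = 2 * (1 + sq_tan_alpha) * np.sum(f * f_p, axis=1)
    # f_apn[i, j] = f_{a,p,n} from the paper, with x_n = f_p[j]
    f_apn = term1 - term2[:, None]
    # exclude the diagonal: a pair's own positive is not a negative
    mask = ~np.eye(N, dtype=bool)
    # per-anchor loss: log(1 + sum_n exp(f_apn)), computed stably
    losses = []
    for i in range(N):
        vals = f_apn[i][mask[i]]
        m = max(0.0, vals.max())  # shift for numerical stability
        losses.append(m + np.log(np.exp(-m) + np.sum(np.exp(vals - m))))
    return float(np.mean(losses))
```

Porting this to TensorFlow should be mechanical: `@` becomes `tf.matmul`, the masked log-sum is `tf.math.reduce_logsumexp` over the off-diagonal entries (with a 0 appended per row for the "1 +" term).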
@ronekko Does this repo reproduce the numbers on Stanford Car in the paper?