Why are s_p and s_n both decreasing during training?
Hello, thanks for your great work! I have a question. As I understand it, the goal of Circle Loss is to push s_p toward 1 and s_n toward 0, with the final decision boundary s_n - s_p + m = 0. But when I run the MNIST example, I find that s_p and s_n within a batch are both decreasing, and their final values are both approximately 0.41. In that case the cosine similarity of positive pairs is close to that of negative pairs. Why would this happen? Put another way, how can the model still separate positive from negative pairs?
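For reference, this is the kind of per-batch logging I used to observe s_p and s_n. It is my own minimal pair-wise Circle Loss sketch following the paper's formulation (alpha_p = [1 + m - s_p]_+, alpha_n = [s_n + m]_+, Delta_p = 1 - m, Delta_n = m), not the code from this repo, with m = 0.25 and gamma = 80 assumed:

```python
import torch
import torch.nn.functional as F

def circle_loss(feats: torch.Tensor, labels: torch.Tensor,
                m: float = 0.25, gamma: float = 80.0) -> torch.Tensor:
    """Pair-wise Circle Loss over one batch, logging mean s_p / s_n.

    feats: (B, D) embeddings; labels: (B,) integer class ids.
    """
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t()                              # cosine similarity matrix

    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=feats.device)
    s_p = sim[same & ~eye]                               # positive-pair similarities
    s_n = sim[~same]                                     # negative-pair similarities
    print(f"mean s_p = {s_p.mean().item():.3f}  mean s_n = {s_n.mean().item():.3f}")

    # Self-paced weights and margins from the paper:
    # alpha_p = [1 + m - s_p]_+, alpha_n = [s_n + m]_+, Delta_p = 1 - m, Delta_n = m
    alpha_p = torch.relu(1 + m - s_p.detach())
    alpha_n = torch.relu(s_n.detach() + m)
    logit_p = -gamma * alpha_p * (s_p - (1 - m))
    logit_n = gamma * alpha_n * (s_n - m)

    # loss = log(1 + sum_j exp(logit_n_j) * sum_i exp(logit_p_i))
    return F.softplus(torch.logsumexp(logit_n, dim=0) + torch.logsumexp(logit_p, dim=0))
```

Watching these two means over the MNIST run is how I saw both values drifting toward ~0.41.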
Thanks. I see this as well. The MNIST example is just my simple test case; maybe you can try some other experiments.
Thanks for your reply! I also tried it on ReID and face recognition tasks, and it behaved the same way. However, when I pretrain the model with triplet loss first and then train the whole network with Circle Loss, s_p ends up much larger than s_n: s_p is approximately 0.78 while s_n is approximately 0.2, which looks more reasonable. Does this mean Circle Loss is not stable enough, or is it too sensitive to the batch size?
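For context, my two-stage schedule looks roughly like the sketch below: a batch-hard triplet warm-up, then switching to the circle_loss function sketched earlier in this thread. The epoch counts and margin are illustrative, not tuned values:

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet(feats, labels, margin=0.3):
    """Batch-hard triplet loss on L2-normalized features (warm-up stage)."""
    feats = F.normalize(feats, dim=1)
    dist = torch.cdist(feats, feats)                          # (B, B) pairwise L2 distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=feats.device)
    hardest_pos = dist.masked_fill(~(same & ~eye), float('-inf')).max(dim=1).values
    hardest_neg = dist.masked_fill(same, float('inf')).min(dim=1).values
    return F.relu(hardest_pos - hardest_neg + margin).mean()

def train_two_stage(model, loader, optimizer, warmup_epochs=20, total_epochs=60):
    """Warm up with triplet loss, then switch to circle_loss (sketched above)."""
    for epoch in range(total_epochs):
        for imgs, labels in loader:
            feats = model(imgs)
            loss = (batch_hard_triplet(feats, labels) if epoch < warmup_epochs
                    else circle_loss(feats, labels))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```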
It seems that how to select samples for a batch matters a lot.
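For ReID-style training, one common way to control this is an identity-balanced P x K sampler, so every batch is guaranteed to contain positive pairs for each anchor. Here is a minimal sketch; `PKSampler` and its `p` / `k` parameters are hypothetical names for illustration, not part of this repo:

```python
import random
from collections import defaultdict
from torch.utils.data import Sampler

class PKSampler(Sampler):
    """Yield indices so that each batch holds P identities with K samples each."""

    def __init__(self, labels, p=16, k=4):
        self.p, self.k = p, k
        self.by_label = defaultdict(list)
        for idx, lb in enumerate(labels):
            self.by_label[lb].append(idx)
        self.ids = list(self.by_label)

    def __iter__(self):
        ids = self.ids[:]
        random.shuffle(ids)
        for i in range(0, len(ids) - self.p + 1, self.p):
            for lb in ids[i:i + self.p]:
                pool = self.by_label[lb]
                # sample K indices per identity, with replacement if the identity is small
                picks = (random.sample(pool, self.k) if len(pool) >= self.k
                         else random.choices(pool, k=self.k))
                yield from picks

    def __len__(self):
        return (len(self.ids) // self.p) * self.p * self.k
```

Used with something like `DataLoader(dataset, batch_size=16 * 4, sampler=PKSampler(dataset_labels, p=16, k=4))`, where `dataset_labels` is the list of class ids in dataset order.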