
Problem in re-ranking

Open avideep opened this issue 5 years ago • 4 comments

Your re-ranking function, where you divide the re-ranked vector by its norm, does not sum to 1. Hence, it is not a valid probability distribution. Any idea how to solve that?

avideep avatar Sep 09 '19 06:09 avideep

@avideep Sorry for the late reply. This implementation is from about 2 years ago and I have forgotten many of the details. But for your question, I think the simplest way is to apply another softmax after re-ranking.
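A minimal sketch of the suggestion above. The `rerank` function here is an assumption modeled on the ATN-style re-ranking being discussed (boost the target class, then divide by the L2 norm, which is why the result no longer sums to 1); the second softmax restores a valid distribution:

```python
import numpy as np

def rerank(y, target, alpha=1.5):
    """Hypothetical ATN-style re-ranking: boost the target class to
    alpha * max(y), then divide by the L2 norm. The L2 normalization
    is why the result no longer sums to 1."""
    y = y.copy()
    y[target] = alpha * y.max()
    return y / np.linalg.norm(y)

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

y = softmax(np.array([2.0, 1.0, 0.5, 0.2]))  # a valid distribution
r = rerank(y, target=2)
print(r.sum())   # > 1 here: not a probability distribution
p = softmax(r)   # second softmax: sums to 1 again
print(p.sum())
```

The loss can then be computed against `p` instead of the raw re-ranked vector `r`.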

RanTaimu avatar Sep 19 '19 15:09 RanTaimu

Thank you so much for the reply. The idea of applying a softmax again had crossed my mind. But I was worried whether it's a good idea when the loss is computed with the original y vector, which is generated by only one softmax operation. Any thoughts on that?

avideep avatar Sep 19 '19 16:09 avideep

@avideep I think it probably doesn't matter. Re-ranking is an operation that forcibly changes the distribution of the predicted result, and the second softmax does the same thing. A problem that deserves more attention is that the softmax may smooth the re-ranked distribution, which means the peak value at the target may lose its power to change the network's decision. In other words, whatever method you use, you should keep the peak of the modified distribution as sharp as possible.
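One common way to counteract that smoothing, sketched here as an assumption rather than anything from the original code, is a temperature parameter in the softmax: dividing the inputs by a temperature T < 1 sharpens the peak.

```python
import numpy as np

def softmax(z, temperature=1.0):
    """Softmax with a temperature; T < 1 sharpens the peak, T > 1 smooths it."""
    z = np.asarray(z, dtype=float) / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# A re-ranked (L2-normalized, non-probability) vector; values are illustrative.
r = np.array([0.54, 0.20, 0.81, 0.09])
for T in (1.0, 0.1):
    p = softmax(r, temperature=T)
    print(T, p.max())  # smaller T gives a sharper peak at the target class
```

With T = 1 the peak is heavily smoothed (the re-ranked values span less than one unit), while T = 0.1 keeps most of the mass on the target class.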

RanTaimu avatar Sep 20 '19 13:09 RanTaimu

Can you explain how to apply the softmax here?

kunal266 avatar Apr 03 '23 06:04 kunal266