
Why are the logits in the numerator of the loss function not masked to exclude comparing a sample with itself?

Open xiaobingbuhuitou opened this issue 1 year ago • 1 comment

Hi experts, I notice that in PaCo/GPaCo the logits in the numerator of the loss function are not masked, while the denominator is masked via `exp_logits = torch.exp(logits) * logits_mask`. Shouldn't the logits in the numerator also be masked, so that a sample is not compared with itself? Also, are the learnable centers used to predict the ground-truth label, effectively turning this into a supervised problem? Thanks 😢
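For context, here is a minimal NumPy sketch of the SupCon-style masking pattern the question refers to (an illustration under my own simplifications, not the actual PaCo repo code; the function name and shapes are mine). It shows the `logits_mask` zeroing self-comparisons in the denominator, mirroring the quoted `exp_logits = torch.exp(logits) * logits_mask` line:

```python
import numpy as np

def supcon_style_loss(features, labels, temperature=0.1):
    """Simplified SupCon-style contrastive loss (sketch, not the repo code).

    features: (n, d) array of L2-normalized embeddings
    labels:   (n,) array of integer class labels
    """
    n = features.shape[0]
    # pairwise similarity logits between all samples
    logits = features @ features.T / temperature
    # logits_mask: 1 everywhere except the diagonal (self-comparisons)
    logits_mask = 1.0 - np.eye(n)
    # positive-pair mask: same label, with the diagonal (self) zeroed out
    mask = (labels[:, None] == labels[None, :]).astype(float) * logits_mask
    # denominator excludes self-comparisons via logits_mask, mirroring
    # exp_logits = torch.exp(logits) * logits_mask in the repo
    exp_logits = np.exp(logits) * logits_mask
    log_prob = logits - np.log(exp_logits.sum(axis=1, keepdims=True))
    # numerator terms are selected by `mask`; in this sketch that mask
    # already has a zero diagonal, so self-pairs contribute nothing
    mean_log_prob_pos = (mask * log_prob).sum(axis=1) / np.maximum(mask.sum(axis=1), 1.0)
    return -mean_log_prob_pos.mean()
```

In this sketch the numerator's positive mask is itself multiplied by `logits_mask`, which is one way the self-comparison can drop out of the numerator; whether PaCo/GPaCo does exactly this (versus handling the centers' logits differently) is the substance of the question.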

xiaobingbuhuitou avatar Nov 15 '23 12:11 xiaobingbuhuitou

Sorry, one more question: in the paper, why does Remark 2 state that after applying parametric contrastive learning the probability becomes alpha/(1 + alpha*K_y) and C becomes 1/(1 + alpha*K_y)? I don't know how to derive this. Thanks 😢

xiaobingbuhuitou avatar Nov 15 '23 12:11 xiaobingbuhuitou