
NTXentLoss, normalize issue.

Open · jzhanghzau opened this issue · 3 comments

Do we have to normalize the feature vectors ourselves before passing them into NTXentLoss? I checked the code in pytorch-metric-learning, and it seems the normalization step isn't included.

Thanks in advance!

JJ

jzhanghzau · May 05 '24 19:05

Ah, I found that the normalization is done when computing the similarity matrix.
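
For anyone else checking this: here is a minimal sketch suggesting that, since the default distance for NTXentLoss is CosineSimilarity (which L2-normalizes internally), pre-normalizing the embeddings should not change the loss value. The random tensors are just illustrative placeholders:

```python
import torch
import torch.nn.functional as F
from pytorch_metric_learning.losses import NTXentLoss

loss_func = NTXentLoss()  # default distance is CosineSimilarity

embeddings = torch.randn(6, 100)
labels = torch.tensor([0, 1, 0, 3, 3, 1])

# CosineSimilarity normalizes internally, so manually normalizing
# first should give the same loss value as passing raw embeddings.
print(loss_func(embeddings, labels))
print(loss_func(F.normalize(embeddings, dim=1), labels))
```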

Another question: suppose I have a dataset where the feature vectors have shape [6, 100] and the label vector is [0, 1, 0, 3, 3, 1]. Is using NTXentLoss in this case similar to supervised contrastive learning?

Thanks!

jzhanghzau · May 05 '24 19:05

NTXentLoss() compares embeddings to reference embeddings, each with associated labels. So if the labels for both the embeddings and the references are known, you can pass them into the loss function and they will be treated as in supervised contrastive learning. In your example, feature embeddings that share a label with a reference embedding would form positive pairs. Your call to the loss function would look something like `loss = loss_func(embeddings1, labels, ref_emb=embeddings2, ref_labels=labels)`.
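
For concreteness, a minimal sketch of that call using the shapes from your example (the two embedding sets here are hypothetical random tensors):

```python
import torch
from pytorch_metric_learning.losses import NTXentLoss

loss_func = NTXentLoss()

# Query embeddings and a second, reference set with shared labels.
embeddings1 = torch.randn(6, 100)
embeddings2 = torch.randn(6, 100)
labels = torch.tensor([0, 1, 0, 3, 3, 1])

# Each embedding in embeddings1 forms positive pairs with every
# reference embedding in embeddings2 that has the same label.
loss = loss_func(embeddings1, labels, ref_emb=embeddings2, ref_labels=labels)
```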

Check out the blue drop-down box in the documentation for NTXentLoss for more information. Issue #6 or this comment are also good places to look for more details.

stompsjo · May 07 '24 12:05

Thanks @stompsjo!

Also, if you're looking specifically for supervised contrastive learning, there is a loss function for that: https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#supconloss
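
A minimal usage sketch with the shapes from the example above (the random tensors are illustrative placeholders):

```python
import torch
from pytorch_metric_learning.losses import SupConLoss

loss_func = SupConLoss()  # default temperature is 0.1

embeddings = torch.randn(6, 100)
labels = torch.tensor([0, 1, 0, 3, 3, 1])

# All pairs of embeddings that share a label are treated as positives,
# as in Supervised Contrastive Learning (Khosla et al., 2020).
loss = loss_func(embeddings, labels)
```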

KevinMusgrave · May 09 '24 11:05