pytorch-metric-learning
NTXentLoss, normalize issue.
Do we have to normalize the feature vectors ourselves before passing them into NTXentLoss? I checked the code in pytorch-metric-learning, and it seems the normalization step isn't included.
Thanks in advance!
JJ
Ah, I found it: the normalization happens when the similarity matrix is computed.
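For what it's worth, here is a minimal check of that (the random tensors are just placeholder data, and it assumes the default CosineSimilarity distance, which L2-normalizes internally):

```python
import torch
import torch.nn.functional as F
from pytorch_metric_learning.losses import NTXentLoss

loss_func = NTXentLoss()  # default distance is CosineSimilarity

embeddings = torch.randn(6, 100)           # raw, unnormalized features
labels = torch.tensor([0, 1, 0, 3, 3, 1])

# The loss is the same whether or not we L2-normalize first,
# since cosine similarity normalizes the vectors internally.
loss_raw = loss_func(embeddings, labels)
loss_norm = loss_func(F.normalize(embeddings, dim=1), labels)
assert torch.allclose(loss_raw, loss_norm, atol=1e-6)
```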
Another question: suppose I have a dataset where the feature vectors have shape [6, 100] and the label vector is [0, 1, 0, 3, 3, 1]. Is using NTXentLoss in this case similar to supervised contrastive learning?
Thanks!
NTXentLoss() compares embeddings to a set of reference embeddings, each with associated labels. So if the labels for both the embeddings and the references are known, you can pass them into the loss function and they will be treated like supervised contrastive learning. In your example, feature embeddings that share a label with a reference embedding form positive pairs. Your call to the loss function would look something like `loss = loss_func(embeddings1, labels, ref_emb=embeddings2, ref_labels=labels)`.
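Something like this sketch should work (shapes and labels taken from your example; the random tensors stand in for your actual features):

```python
import torch
from pytorch_metric_learning.losses import NTXentLoss

loss_func = NTXentLoss(temperature=0.07)

# two sets of embeddings with shared labels, as in the example above
embeddings1 = torch.randn(6, 100)
embeddings2 = torch.randn(6, 100)
labels = torch.tensor([0, 1, 0, 3, 3, 1])

# embeddings1 are compared against the reference set embeddings2;
# any (embedding, reference) pair that shares a label is a positive pair
loss = loss_func(embeddings1, labels, ref_emb=embeddings2, ref_labels=labels)
```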
Check out the blue drop-down box in the documentation for NTXentLoss for more information. Issue #6 or this comment are also good places to look for more details.
Thanks @stompsjo!
Also, if you're looking specifically for Supervised Contrastive Learning, there is a loss function for that: https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#supconloss
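A minimal sketch with SupConLoss, using the same placeholder data as above (the temperature shown is just the library's documented default spelled out):

```python
import torch
from pytorch_metric_learning.losses import SupConLoss

loss_func = SupConLoss(temperature=0.1)  # 0.1 is the documented default

embeddings = torch.randn(6, 100)
labels = torch.tensor([0, 1, 0, 3, 3, 1])

# embeddings sharing a label are treated as positives,
# as in the Supervised Contrastive Learning paper
loss = loss_func(embeddings, labels)
```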