LearningToCompare_ZSL
Why do you rebuild the labels during the training part?
In the training part you rebuild the labels. Why don't you just use the raw labels directly?
@Jason-WT I think rebuilding the labels is just for generating the one-hot labels, i.e. `one_hot_labels = Variable(torch.zeros(BATCH_SIZE, class_num).scatter_(1, re_batch_labels.view(-1,1), 1)).cuda()`. I tried to use the original labels to generate the one-hot labels and it failed. Rebuilding the labels does not affect training; the RN only needs to know whether the CNN feature and the attribute feature belong to the same class or not.
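For reference, here is a minimal sketch of what the rebuild plus one-hot step amounts to (the names `BATCH_SIZE`, `class_num` and `re_batch_labels` follow the repo's training loop, but the tensor values below are made up, and the `Variable`/`.cuda()` calls from the snippet above are omitted):

```python
import torch

BATCH_SIZE = 4
class_num = 3                      # hypothetical: number of classes in this batch

# Suppose the raw dataset labels for this batch are non-contiguous ids.
batch_labels = torch.tensor([7, 42, 7, 19])

# "Rebuild" them: map each distinct raw label to an index in [0, class_num).
label_map = {int(raw): idx for idx, raw in enumerate(batch_labels.unique())}
re_batch_labels = torch.tensor([label_map[int(l)] for l in batch_labels])   # tensor([0, 2, 0, 1])

# One-hot encoding via scatter_, as in the snippet above.
one_hot_labels = torch.zeros(BATCH_SIZE, class_num).scatter_(
    1, re_batch_labels.view(-1, 1), 1)
print(one_hot_labels)
```

The RN is then trained against `one_hot_labels`, so only the class identity within the batch matters, not the raw label values.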
Thanks a lot, I got it!
Hi, I find that the code does not work well on the aPY and SUN datasets. Could you please explain this phenomenon?
> I tried to use the original labels to generate the one-hot labels and it failed.

Why would using the original labels fail?
The original label space is not contiguous. For example, with ten classes the labels might be (1, 2, 4, 6, 9, 11, 22, 34, 55, 90) rather than (0, 1, 2, 3, 4, 5, 6, 7, 8, 9), so they cannot be used directly as column indices of a BATCH_SIZE x class_num one-hot matrix. It doesn't matter, though; using the rebuilt labels works fine.
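To make the failure mode concrete, here is a small self-contained example (hypothetical label values, not the repo's data): with `class_num = 10` the one-hot matrix only has columns 0..9, so a raw label like 90 is an out-of-range index for `scatter_`, while the rebuilt contiguous labels work:

```python
import torch

class_num = 10
raw_labels = torch.tensor([1, 2, 4, 90])          # non-contiguous raw ids

# Using the raw labels directly: 90 is not a valid column index in [0, 10).
try:
    torch.zeros(len(raw_labels), class_num).scatter_(1, raw_labels.view(-1, 1), 1)
except RuntimeError as e:
    print("raw labels fail:", e)

# Rebuilding to contiguous indices 0..N-1 makes scatter_ valid.
label_map = {int(v): i for i, v in enumerate(raw_labels.unique())}   # {1: 0, 2: 1, 4: 2, 90: 3}
rebuilt = torch.tensor([label_map[int(l)] for l in raw_labels])      # tensor([0, 1, 2, 3])
one_hot = torch.zeros(len(rebuilt), class_num).scatter_(1, rebuilt.view(-1, 1), 1)
print(one_hot)
```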
> I find that the code does not work well on the aPY and SUN datasets.

Do you know why now?