meta-transfer-learning
Question in meta phase: Why Generate the labels of support and query ?
Thanks for your very enlightening work. However, I found a spot in your code that is quite confusing to me. In the PyTorch version of MetaTrainer (lines 115-120, 132-138, and 182-187), you generate the labels for the support and query sets, so the class indices are in order. But in sampling, the class indices are scrambled. Why not use the actual class indices?
Thanks for your interest in our work.
We do not shuffle the order of classes within one task during task sampling. During base learning, we update the model with all the samples in one batch, so we don't need to shuffle the samples.
Thank you for your quick reply, but I'm still confused about it. In CategoriesSampler, the classes are randomly selected for each task, e.g., [43, 5, 63, 19, 58], but when the labels are generated, they become [0, 1, 2, 3, 4]. The label index does not equal the real class index.
Here I printed the generated labels of the train shot and query, and also printed the real classes that were selected. Though the order is not shuffled, the class indices are not the same. Will this affect the accuracy? Maybe I have some error in understanding the code.
During meta-learning, we're doing a 5-class classification task, so the labels for the classes range from 0 to 4. As we use F.cross_entropy to compute the loss, we cannot set the ground truth for a 5-dim vector as [43, 5, 63, 19, 58].
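For illustration, here is a minimal sketch of the relabeling (the variable names are hypothetical, not taken from the repository): each sampled real class ID is mapped to its position within the episode, so the targets fall in the range the loss expects.

```python
# Hypothetical example: real class IDs sampled for one 5-way task
sampled_classes = [43, 5, 63, 19, 58]

# Remap each real class ID to its position in the episode (0..4),
# since cross-entropy targets must lie in [0, num_classes)
relabel = {real: pos for pos, real in enumerate(sampled_classes)}

# With 1 shot per class, the support labels become [0, 1, 2, 3, 4]
support_labels = [relabel[c] for c in sampled_classes]
print(support_labels)  # → [0, 1, 2, 3, 4]
```

Because the remapping is consistent between the support and query sets of the same task, the accuracy is unaffected; the model only needs to distinguish the 5 classes within the episode, not identify their global IDs.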
https://github.com/yaoyao-liu/meta-transfer-learning/blob/d4ab548fe4258bab8e1549c8e1c9be175d52afe1/pytorch/trainer/meta.py#L153
You may check the PyTorch source code here for the details of F.cross_entropy.
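To see why the target range matters, here is a small pure-Python sketch of the per-sample softmax cross-entropy that F.cross_entropy computes (a simplified stand-in, not the actual PyTorch implementation): the target is used as an index into the logit vector, so a real class ID like 43 would be out of range for a 5-dim prediction.

```python
import math

def cross_entropy(logits, target):
    # target is a class index into logits, so it must be in [0, len(logits))
    m = max(logits)  # subtract the max for numerical stability
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum_exp - logits[target]

logits = [2.0, 0.5, 0.1, -1.0, 0.3]   # one 5-dim prediction
loss = cross_entropy(logits, 0)       # valid: episode label 0
# cross_entropy(logits, 43) would fail: 43 is not a valid index
# into a 5-dim logit vector
```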
Thank you, I got it.