Learning-Loss-for-Active-Learning

Question about your reproduced results

Open Bardielz opened this issue 3 years ago • 3 comments

Hi, I have a question about how the four parts are set up:

- Random: is it just randomly sampling from the training set and training on that?
- Reference: is this not the learned loss?
- Ground truth loss: how is it different from the random part?
- Learn loss: is this what the paper uses?

Bardielz avatar Dec 02 '20 07:12 Bardielz

Hi, as you know, the LL4AL method uses the loss learning module to select the next set of data points to be labeled at every cycle. In this sense:

- Random: select random data points from the remaining unlabeled set, then label them.
- Learn loss: this is the result of our reproduction.
- Reference: these are the results reported in the paper, so our reproduced model should reach this accuracy.
- Ground truth loss: this is our own experiment, so it does not appear in the original authors' paper. Instead of using the loss learning module to predict the loss of the unlabeled data, it calculates the loss from the ground-truth labels of the unlabeled data. This is possible on CIFAR10 since all images actually have labels.

This might be confusing, so note that (1) the CIFAR10 dataset in fact has labels and the authors intentionally remove them, and (2) therefore the authors 'predict' loss rather than 'calculate' it.
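To make the difference between the strategies concrete, here is a minimal sketch of the per-cycle selection step. The function names are hypothetical (they do not come from the LL4AL codebase); the only difference between "learn loss" and "ground truth loss" is where the loss values in `losses` come from (loss prediction module vs. losses computed from the real labels), while "random" ignores losses entirely.

```python
import random


def select_topk_by_loss(losses, k):
    """Return indices of the k unlabeled samples with the highest loss.

    In the 'learn loss' setting, `losses` would be the outputs of the
    loss prediction module; in the 'ground truth loss' experiment they
    are computed from the real (hidden) CIFAR10 labels.
    """
    ranked = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    return ranked[:k]


def select_random(n_unlabeled, k, seed=0):
    """The 'random' baseline: sample k indices uniformly at random."""
    rng = random.Random(seed)
    return rng.sample(range(n_unlabeled), k)
```

For example, `select_topk_by_loss([0.1, 0.9, 0.5, 0.3], 2)` returns `[1, 2]`, the two samples with the largest losses.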

Mephisto405 avatar Dec 15 '20 08:12 Mephisto405

Hi @Mephisto405 Isn't it weird that the query strategy by ground truth losses is performing so poorly? Theoretically, this strategy should be as good as LL4AL or better. Do you have any intuitions about why this is happening?

Reasat avatar Apr 19 '21 18:04 Reasat

@Mephisto405

> Hi @Mephisto405 Isn't it weird that the query strategy by ground truth losses is performing so poorly? Theoretically, this strategy should be as good as LL4AL or better. Do you have any intuitions about why this is happening?

I also have the same question: why is ground truth loss performing poorly?

manza-ari avatar Sep 03 '21 08:09 manza-ari