YUI
It is a common operation when using gradient reversal. See Eq. 14 in the following paper: https://arxiv.org/pdf/1409.7495.pdf
Hi. For each dataset, we used the validation set to tune the hyperparameters, but we kept the same hyperparameters when only the number of labeled samples changed.
The loss function is "Loss_cel + Loss_abv", but in the model code a [gradient reversal layer](https://github.com/YU1ut/openset-DA/blob/master/models.py#L125-L126) is used to effectively make it "Loss_cel - Loss_abv". I am still trying...
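For reference, a gradient reversal layer acts as the identity in the forward pass and flips (and optionally scales) the gradient in the backward pass, which is what turns the "+" in the combined loss into a "-" for the layers below it. Below is a minimal self-contained sketch in PyTorch using `torch.autograd.Function`; the names `GradReverse`/`grad_reverse` and the `lambd` scaling factor are illustrative assumptions, not the exact code from the linked repo.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambd in backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd          # stash the scaling factor for backward
        return x.view_as(x)        # identity mapping

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient; None corresponds to lambd, which needs no grad.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Demo: the forward value is unchanged, but the gradient sign is flipped.
x = torch.ones(3, requires_grad=True)
y = grad_reverse(x, lambd=1.0).sum()   # forward: same as x.sum() == 3.0
y.backward()
print(x.grad)                          # tensor([-1., -1., -1.])
```

So a head trained on `Loss_cel + Loss_abv` above the reversal layer pushes `-Loss_abv` gradients into the shared feature extractor below it, which is the adversarial effect described in Eq. 14 of the paper linked above.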
Hi. I also find that the accuracy for sample 0 and sample 4 is low, but I also don't know how to solve it. Maybe it is the limitation of...
I think augmentations such as the affine augmentation used in [this repo](https://github.com/Britefury/self-ensemble-visual-domain-adapt) are helpful. What kinds of tricks have you tried?