Results 30 comments of Michael Shieh

Thanks! This will cause the same data example to be written twice to the tfrecord, and hence result in a larger tfrecord file with repeated examples. We'll fix it.
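One way to guard against this is to skip serialized examples that have already been written. The sketch below is illustrative, not from the UDA codebase: `write_unique` and its arguments are hypothetical names, and `writer` stands in for anything with a `write(bytes)` method, such as a `tf.io.TFRecordWriter`.

```python
import hashlib

def write_unique(examples, writer):
    """Write each serialized example at most once, skipping duplicates.

    `examples` is an iterable of serialized byte strings (e.g. from
    tf.train.Example.SerializeToString()); `writer` is any object with a
    write(bytes) method. Both names are illustrative.
    """
    seen = set()
    written = 0
    for record in examples:
        digest = hashlib.sha1(record).hexdigest()
        if digest in seen:
            continue  # this exact example was already written; skip it
        seen.add(digest)
        writer.write(record)
        written += 1
    return written
```

Hashing the serialized bytes keeps memory bounded even when individual examples are large.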

The text part of the code should be compatible with both Python 2 and Python 3, since BERT works for both. For the vision part, we used Python 2 since...

Hi, you can directly use the current code for other datasets and we used similar hyperparameters for them. You can get the supervised data for other datasets from [here](http://bit.ly/2kRWoof). The...

Yes, we used all unlabeled reviews. Sorry for the late reply!

It's set to 0.7, as mentioned in Appendix C of our paper.

Hi, we set uda_confidence_thresh to 0.8 and uda_softmax_temp to 0.9 for that case. Thanks!

Hi, we used uda_softmax_temp and uda_confidence_thresh only for 250 labels. When the size of the labeled set is 500, 1000, or 2000, we simply used entropy minimization with coefficient 0.1. For...
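For readers unfamiliar with these two knobs: `uda_confidence_thresh` masks out unlabeled examples the model is unsure about, and `uda_softmax_temp` sharpens the predicted distribution used as the consistency target by dividing the logits by a temperature below 1. The stdlib-only, single-example sketch below is an illustration of that idea with hypothetical names; the released code implements it in TensorFlow over batches.

```python
import math

def sharpen_and_mask(logits, confidence_thresh=0.8, softmax_temp=0.9):
    """Illustrative sketch of UDA-style confidence masking and sharpening.

    Returns (keep, sharpened): `keep` is True when the max predicted
    probability exceeds the confidence threshold, and `sharpened` is the
    softmax of logits divided by the temperature (a peakier target
    distribution when softmax_temp < 1).
    """
    def softmax(xs):
        m = max(xs)  # subtract the max for numerical stability
        exps = [math.exp(x - m) for x in xs]
        z = sum(exps)
        return [e / z for e in exps]

    probs = softmax(logits)
    keep = max(probs) > confidence_thresh       # confidence masking
    sharpened = softmax([x / softmax_temp for x in logits])  # sharpening
    return keep, sharpened
```

With a temperature of 0.9, the sharpened distribution puts slightly more mass on the predicted class than the raw softmax does, which strengthens the consistency-training signal on confident examples.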

We simply used the [open-sourced policies](https://github.com/tensorflow/models/blob/master/research/autoaugment/policies.py#L21) of AutoAugment. Using fewer sub-policies also works. With 10 sub-policies, the error rate is 5.28 +- 0.17, similar to the reported error rate of...
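An AutoAugment policy is a list of sub-policies, and each image gets one randomly chosen sub-policy, whose operations are each applied with their own probability. The sketch below mirrors that structure under one simplifying assumption of mine: each operation is represented as a callable `(image, level) -> image` rather than by name as in the open-sourced policy file.

```python
import random

def apply_random_subpolicy(image, policy, rng=random):
    """Apply one randomly chosen sub-policy to an image.

    `policy` is a list of sub-policies; each sub-policy is a list of
    (op, prob, level) triples where `op` is a callable taking
    (image, level). The callable-based representation is illustrative.
    """
    subpolicy = rng.choice(policy)          # one sub-policy per image
    for op, prob, level in subpolicy:
        if rng.random() < prob:             # each op fires with its own prob
            image = op(image, level)
    return image
```

Reducing the number of sub-policies then just means passing a shorter `policy` list, which is consistent with the observation above that fewer sub-policies still work well.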

Your work is indeed related to ours, though it aims to learn representations. We'll add a citation in the future version.

Thanks! We don't plan to release the ImageNet code because it requires a lot of code cleaning. The hyperparameters for ImageNet experiments are available [here](https://github.com/google-research/uda/issues/5). The augmentation policy is available...