Transfer-Learning-Library
Cannot reproduce the results for fine-tuning the unsupervised pre-trained model in task_adaptation/image_classification
When fine-tuning the unsupervised pre-trained model, only the erm result can be reproduced; the co_tuning and bi_tuning accuracies are lower than erm.
Can you give a more detailed description, such as the experimental dataset and the proportion of labeled data? Also, which version of PyTorch are you using? It's suggested to use pytorch==1.7.1 and torchvision==0.8.2 in order to reproduce the benchmark results.
I have tried running erm and co_tuning, both when fine-tuning the supervised pre-trained model and with MoCo (Unsupervised Pretraining), on CUB-200-2011, Stanford Cars, and FGVC Aircraft. I just ran the provided .sh scripts, and all the results for fine-tuning the supervised pre-trained model can be reproduced. But for MoCo (Unsupervised Pretraining), only erm can be reproduced; the results for co_tuning are lower than erm on all three datasets and at every proportion of labeled data. I have tried both torch==1.7.0 with torchvision==0.8.0 and pytorch==1.7.1 with torchvision==0.8.2.
Thanks for providing the details. I'm running these experiments again.
I found out that this is because an important command line option `--finetune` is missing from the provided script. By specifying `--finetune` we apply a 0.1x learning rate to the backbone and are able to reproduce the results.
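
For reference, here is a minimal sketch (not the library's actual code) of how a 0.1x backbone learning rate is typically implemented in PyTorch via optimizer parameter groups; the model choice, `base_lr` value, and the backbone/head split below are illustrative assumptions, not the repo's exact settings:

```python
import torch
import torchvision

# Illustrative model: a ResNet-50 whose final fc layer is the new task head.
model = torchvision.models.resnet50()

# Split parameters: everything except the head counts as the backbone here.
backbone_params = [p for n, p in model.named_parameters() if not n.startswith("fc")]
head_params = list(model.fc.parameters())

base_lr = 0.01  # hypothetical value, chosen only for this example

optimizer = torch.optim.SGD(
    [
        {"params": backbone_params, "lr": 0.1 * base_lr},  # 0.1x lr for the pre-trained backbone
        {"params": head_params, "lr": base_lr},            # full lr for the randomly initialized head
    ],
    lr=base_lr,
    momentum=0.9,
    weight_decay=1e-4,
)
```

The per-group learning rates in the list override the default `lr`, so the backbone updates ten times more slowly than the head, which is the effect the `--finetune` flag should enable in the provided scripts.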