
Cannot reproduce the results when fine-tuning the unsupervised pre-trained model in task_adaptation/image_classification

Open ReloJeffrey opened this issue 2 years ago • 4 comments

When fine-tuning the unsupervised pre-trained model, only the erm result can be reproduced; the co_tuning and bi_tuning accuracies are lower than erm.

ReloJeffrey avatar Aug 07 '22 03:08 ReloJeffrey

Can you give a more detailed description, such as the experimental dataset and the proportion of labeled data? Also, which version of PyTorch are you using? It's suggested to use pytorch==1.7.1 and torchvision==0.8.2 in order to reproduce the benchmark results.

thucbx99 avatar Aug 07 '22 13:08 thucbx99

> Can you give a more detailed description, such as the experimental dataset and the proportion of labeled data? Also, which version of PyTorch are you using? It's suggested to use pytorch==1.7.1 and torchvision==0.8.2 in order to reproduce the benchmark results.

I have tried running erm and co_tuning, both under "Fine-tune the supervised pre-trained model" and under "MoCo (Unsupervised Pretraining)", on CUB-200-2011, Stanford Cars, and Aircraft. I just ran the provided .sh scripts, and I found that all the results under "Fine-tune the supervised pre-trained model" can be reproduced. But for MoCo (Unsupervised Pretraining), only erm can be reproduced. The co_tuning results are lower than erm on all three datasets and at every proportion of labeled data. I have tried both torch==1.7.0 with torchvision==0.8.0 and pytorch==1.7.1 with torchvision==0.8.2.

ReloJeffrey avatar Aug 08 '22 13:08 ReloJeffrey

Thanks for providing the details. I'm running these experiments again.

thucbx99 avatar Aug 09 '22 12:08 thucbx99

I found out that this is because an important command-line option, --finetune, is missing from the provided script. Specifying --finetune applies a 0.1x learning rate to the backbone, and with it we are able to reproduce the results.
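
For reference, here is a minimal PyTorch sketch of what a --finetune-style option like this typically does: the pre-trained backbone is given 0.1x the base learning rate via optimizer parameter groups, while the newly initialized classification head keeps the full rate. The model, head size, base lr, and optimizer hyperparameters below are illustrative assumptions, not the library's exact code.

```python
import torch
import torchvision.models as models

# Backbone would be loaded with MoCo pre-trained weights in practice;
# here it is left randomly initialized for a self-contained sketch.
backbone = models.resnet50(pretrained=False)
num_classes = 200  # e.g. CUB-200-2011 (assumed for illustration)
head = torch.nn.Linear(backbone.fc.in_features, num_classes)
backbone.fc = torch.nn.Identity()  # strip the original classifier

base_lr = 0.01  # assumed base learning rate

# Two parameter groups: 0.1x lr for the pre-trained backbone,
# full lr for the freshly initialized head.
optimizer = torch.optim.SGD(
    [
        {"params": backbone.parameters(), "lr": 0.1 * base_lr},
        {"params": head.parameters(), "lr": base_lr},
    ],
    lr=base_lr,
    momentum=0.9,
    weight_decay=5e-4,
)
```

Without the smaller backbone learning rate, the full-rate updates can quickly distort the unsupervised pre-trained features, which is consistent with co_tuning underperforming erm in the runs reported above.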

thucbx99 avatar Aug 09 '22 14:08 thucbx99