pmf_cvpr22
The effect of fine-tuning without domain gap
Your outstanding performance surprised us, and I have tried to apply the method in my own work. In a few-shot classification experiment on a single dataset, MiniImageNet (without cross-domain data from multiple datasets), fine-tuning made a very small difference, failing to bring even a 0.1% improvement. Was the fine-tuning effect significant in your experiments? What do you think might be the cause of this?
I guess fine-tuning works better when the domain gap is not too small. On MiniImageNet, the pre-trained DINO features already work amazingly well; it is now well understood that foundation models solve many classification problems out of the box, leaving little room for test-time adaptation. While scaling up foundation models solves more and more problems, certain domains (e.g. 3D vision) still lack good foundation models, and there meta-learning and fine-tuning remain good practice.
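For intuition, here is a minimal sketch of test-time fine-tuning on the support set with a prototypical classifier. This is not the repository's exact routine (the paper additionally uses augmented copies of the support set as pseudo-queries); the `backbone` feature extractor, step count, learning rate, and temperature below are illustrative assumptions:

```python
# Hypothetical sketch of support-set fine-tuning, NOT the repo's implementation.
# Assumes `backbone` is a PyTorch module mapping images (B, C, H, W) -> features (B, D).
import torch
import torch.nn.functional as F

def finetune_on_support(backbone, support_x, support_y, steps=50, lr=1e-5):
    """Adapt the backbone on an N-way K-shot support set.

    support_x: (N*K, C, H, W) support images
    support_y: (N*K,) integer class labels in [0, N)
    """
    backbone.train()
    opt = torch.optim.Adam(backbone.parameters(), lr=lr)
    n_way = int(support_y.max().item()) + 1
    for _ in range(steps):
        feats = F.normalize(backbone(support_x), dim=-1)   # (N*K, D)
        # Class prototypes: mean normalized feature per class
        protos = torch.stack(
            [feats[support_y == c].mean(0) for c in range(n_way)]
        )                                                  # (N, D)
        # Cosine-similarity logits against prototypes (temperature = 10, assumed)
        logits = feats @ protos.t() * 10.0
        loss = F.cross_entropy(logits, support_y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return backbone
```

With a strong pre-trained backbone and no domain gap, the support loss in a loop like this is already near zero from the first step, so the weights barely move, which would be consistent with the <0.1% change you observed.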