
The effect of fine-tuning without domain gap

Open rachel-lyq opened this issue 1 year ago • 1 comment

Your outstanding performance has surprised us, and I have tried to apply the method in my own work. In a few-shot classification experiment on a single dataset, MiniImageNet (without cross-domain data from multiple datasets), fine-tuning made a very small difference, failing to bring even a 0.1% improvement. Was the fine-tuning effect significant in your experiments? What do you think might be the cause of this?

rachel-lyq avatar Jul 19 '23 03:07 rachel-lyq

I guess fine-tuning works better when the domain gap is not too small. On MiniImageNet, the pre-trained DINO features alone work amazingly well, as it is now well understood that foundation models solve many classification problems. While scaling up foundation models solves more and more problems, certain domains (e.g. 3D vision) still lack a good foundation model, and there meta-learning and fine-tuning remain good practice.
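To make the trade-off concrete, here is a minimal NumPy sketch of the fine-tuning stage being discussed: a prototype-initialized linear head trained for a few gradient steps on the support set of one episode. This is a simplification, not the repository's actual code — PMF fine-tunes the whole backbone, and the random features below merely stand in for frozen DINO embeddings. When the backbone features already separate the classes well (the no-domain-gap case), these extra steps change the decision boundary very little, which matches the tiny improvement reported above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 5-way 5-shot episode: hypothetical frozen-backbone features,
# standing in for e.g. DINO ViT embeddings of MiniImageNet support images.
n_way, n_shot, dim = 5, 5, 64
support = rng.normal(size=(n_way * n_shot, dim))
labels = np.repeat(np.arange(n_way), n_shot)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ce_loss(W):
    # Mean cross-entropy of the linear head on the support set.
    p = softmax(support @ W.T)
    return -np.log(p[np.arange(len(labels)), labels]).mean()

# ProtoNet-style initialization: one weight row per class = class-mean feature.
W = np.stack([support[labels == c].mean(axis=0) for c in range(n_way)])
loss_before = ce_loss(W)

# "Fine-tuning": a few steps of gradient descent on the support set only.
# Gradient of softmax cross-entropy w.r.t. W is (softmax - one_hot)^T X / N.
lr = 0.1
for _ in range(50):
    p = softmax(support @ W.T)
    p[np.arange(len(labels)), labels] -= 1.0
    W -= lr * (p.T @ support) / len(labels)

loss_after = ce_loss(W)
print(loss_before, loss_after)  # loss drops on the support set
```

Whether that support-set improvement transfers to the query set is exactly the domain-gap question: with strong in-domain features it mostly does not, while under a large domain shift the same few steps can adapt the representation meaningfully.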

hushell avatar Jun 05 '24 21:06 hushell