MAE-pytorch

Bad transfer learning result when fine-tuning on iNaturalist 2019 (not IN1K)

Open cssddnnc9527 opened this issue 2 years ago • 7 comments

Dear Author,

First of all, thank you for your great contribution.

When fine-tuning on IN1K from the pre-trained model (which was also pre-trained on IN1K), my result is similar to the paper's, as follows: Screenshot from 2021-12-30 14-13-49

But if I fine-tune on iNaturalist with the same pre-trained model and the same fine-tuning parameters listed on your GitHub page, the result is really bad, as follows: Screenshot from 2021-12-30 14-14-16

So, what do you think the possible reason could be? Looking forward to your reply. Thanks in advance!

BTW, iNaturalist has about 260,000 images across 1,010 classes. The train and val data are not pre-separated in iNaturalist, so I split them following the IN1K ratio (96% train, 4% val).

In addition, do you plan to implement fine-tuning code for object detection and semantic segmentation? If yes, how much longer would we need to wait? Thanks again!

cssddnnc9527 avatar Dec 30 '21 06:12 cssddnnc9527

Hello, may I ask how you loaded the author's pre-trained model during fine-tuning? Could you provide the code? Thanks.

zsddd avatar Jan 02 '22 09:01 zsddd

@cssddnnc9527 Thanks for your kind words!

Are the pre-training weights loaded correctly?

pengzhiliang avatar Jan 04 '22 02:01 pengzhiliang

@zsddd --finetune "/path/to/model_weight"

pengzhiliang avatar Jan 04 '22 02:01 pengzhiliang
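For reference, here is a minimal sketch of what pointing --finetune at a pre-training checkpoint amounts to: loading the MAE encoder weights into a ViT classifier before fine-tuning. This is not the repo's exact code; the checkpoint layout (a 'model' key, an 'encoder.' prefix, decoder keys to drop) and the use of timm are assumptions, so check the repo's fine-tuning script for the actual handling.

```python
# Minimal sketch (not the repo's exact code): load MAE pre-training weights
# into a ViT classifier before fine-tuning. The checkpoint layout ('model'
# key, 'encoder.' prefix, decoder keys to drop) is an assumption.
import torch
import timm

# Build the fine-tuning model; 1010 classes for iNaturalist 2019.
model = timm.create_model('vit_base_patch16_224', pretrained=False, num_classes=1010)

ckpt = torch.load('/path/to/model_weight', map_location='cpu')
state = ckpt.get('model', ckpt)

# Keep encoder weights only and strip the prefix so names match the ViT.
state = {k.replace('encoder.', ''): v
         for k, v in state.items() if not k.startswith('decoder')}

missing, unexpected = model.load_state_dict(state, strict=False)
print('missing keys:', missing)        # expect only the new classification head
print('unexpected keys:', unexpected)  # expect pre-training-only parameters, if any
```

If the printed missing keys include anything other than the new classification head, the backbone weights were probably not matched, which is the "loaded correctly" check mentioned above.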

@pengzhiliang

Thanks for your reply!

I found the root cause. It's a data split problem: I split the data roughly overall, instead of splitting within each class.
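
For anyone hitting the same problem, below is a minimal sketch of a per-class (stratified) 96%/4% split. The directory layout (one sub-directory per class, ImageFolder style) and all paths are assumptions, not part of the repo.

```python
# Minimal sketch of a per-class 96/4 train/val split, assuming an
# ImageFolder-style layout (one sub-directory per class). Paths are placeholders.
import os
import random
import shutil

random.seed(0)
src_dir, train_dir, val_dir = 'inat2019/all', 'inat2019/train', 'inat2019/val'

for cls in os.listdir(src_dir):
    files = sorted(os.listdir(os.path.join(src_dir, cls)))
    random.shuffle(files)
    n_val = max(1, round(0.04 * len(files)))  # 4% of *each* class goes to val
    for subset, names in (('val', files[:n_val]), ('train', files[n_val:])):
        dst = os.path.join(val_dir if subset == 'val' else train_dir, cls)
        os.makedirs(dst, exist_ok=True)
        for name in names:
            shutil.copy(os.path.join(src_dir, cls, name), os.path.join(dst, name))
```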

In addition, do you plan to implement fine-tuning code for object detection and semantic segmentation? If yes, how much longer would we need to wait? Thanks again!

cssddnnc9527 avatar Jan 04 '22 03:01 cssddnnc9527

Hello~ I am sorry, but transferring it to downstream tasks like semantic segmentation is not on my schedule now.

But it is not a hard job; please refer to semantic_segmentation in beit.

pengzhiliang avatar Jan 04 '22 05:01 pengzhiliang

Would you mind giving a short tutorial? I am not familiar with the mmsegmentation lib, and it is confusing to me. I'm sorry to take up your time, but if you could give us a short tutorial, I would appreciate it. Thanks again!~

insomniaaac avatar Jan 18 '22 07:01 insomniaaac

Hey, if you've managed to fine-tune on iNaturalist, would you mind sharing the weights? Thanks in advance!

idansc avatar Oct 27 '22 10:10 idansc