AdaptFormer
How to add AdaptMLP to Swin Transformer?
Thanks for sharing such great work! I have a question about how to use AdaptMLP in Swin. Since the channel count differs across Swin stages, how should the middle (bottleneck) channel be set in this case?
Thanks for your interest. For Swin, we use `bottleneck = dim // 12`, which gives a similar number of parameters to the plain ViT setting.
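A minimal sketch of how that plays out per stage, assuming Swin-T stage dims of 96/192/384/768; the `AdaptMLP` class below is an illustrative stand-in for the repo's adapter module, not its exact code:

```python
import torch.nn as nn

class AdaptMLP(nn.Module):
    """Illustrative bottleneck adapter: down-project -> ReLU -> up-project, scaled."""
    def __init__(self, dim, bottleneck, dropout=0.0, scale=0.1):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck, dim)
        self.drop = nn.Dropout(dropout)
        self.scale = scale

    def forward(self, x):
        # Output of the parallel branch, to be added to the frozen MLP's output.
        return self.up(self.drop(self.act(self.down(x)))) * self.scale

# Swin channels double at each stage, so the bottleneck follows dim // 12.
stage_dims = [96, 192, 384, 768]  # Swin-T stage dims (assumed example)
adapters = nn.ModuleList([AdaptMLP(d, bottleneck=d // 12) for d in stage_dims])
print([a.down.out_features for a in adapters])  # -> [8, 16, 32, 64]
```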
Thank you for sharing. I would like to know: the Swin pre-trained weights are mainly for 224 or 384 input, but when I use Swin the input size is 1024 or 1120. If the pre-trained weights are frozen in that case and only AdaptMLP is trained, does it still work well? What input size did you use when applying AdaptMLP to Swin?
Hi, @LUO77123
Thanks for your interest. I am sorry, but I am not sure I understand you correctly.
We use an input size of 224x224 for the Swin Transformer; we did not experiment with other image sizes.
Hello, what I mean is using Swin as the backbone of an object detection network. The input image size is then no longer the 224x224 or 384x384 used for pre-training, but 1024x1024 or 1120x1120. In that case, if I freeze the pre-trained weights and only train the unfrozen layers inside AdaptMLP, will it still work well?
For downstream tasks, please refer to https://github.com/ShoufaChen/AdaptFormer/issues/1. We will update the related results for downstream tasks after finishing the experiments.
thanks
Hello, one last question. To apply AdaptMLP in a Swin detection backbone, is the procedure to build a new state dict from the 384x384 Swin pre-trained weights that matches the new network structure and load it, then freeze the pre-trained weights and train only the unfrozen AdaptMLP layers? Is that how it is done?
Yes, you are right.
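In case it helps, a sketch of that load-and-freeze procedure under the stated assumptions; the function name, the checkpoint layout, and the `"adapt"` name filter are my assumptions, not the repo's code:

```python
import torch
import torch.nn as nn

def load_and_freeze(backbone: nn.Module, ckpt_path: str) -> nn.Module:
    """Load 384x384 Swin weights into a backbone that contains AdaptMLP modules,
    then freeze everything except the adapter parameters."""
    state = torch.load(ckpt_path, map_location="cpu")
    state = state.get("model", state)  # some checkpoints nest the weights under "model"

    # strict=False: the adapter keys are absent from the checkpoint and keep their
    # random init; all matching Swin keys are overwritten with pretrained values.
    missing, unexpected = backbone.load_state_dict(state, strict=False)
    print("newly initialized keys (adapters, new heads):", missing)
    print("checkpoint keys without a match:", unexpected)

    # Freeze the pretrained weights. The name filter is an assumption; match it
    # to however the adapter modules are actually named in your backbone.
    for name, param in backbone.named_parameters():
        param.requires_grad = "adapt" in name.lower()
    return backbone
```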
OK, thank you, I will try it. Are you planning to open-source this downstream image-processing approach in mid or late June?
Could you also tell me where the weight-freezing code is in your video-processing implementation? I was careless and did not look closely; could you point me to it so I can study it?
Here: https://github.com/ShoufaChen/AdaptFormer/blob/main/main_video.py#L340-L348
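As a side note, a small helper (not from the repo) that can be run after that block to double-check what was left trainable:

```python
import torch.nn as nn

def report_frozen(model: nn.Module) -> None:
    """Print which parameters are still trainable after the freezing step."""
    tuned = [(n, p.numel()) for n, p in model.named_parameters() if p.requires_grad]
    frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
    for name, numel in tuned:
        print(f"tuned: {name} ({numel})")
    print(f"total tuned: {sum(n for _, n in tuned):,} | total frozen: {frozen:,}")
```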
Thanks, I have made the modification and it runs, but I am unsure about the three values (`mid_dim=64`, `dropout=drop`, `s=0.1`). The experiments in the paper show `mid_dim=64`, and I set dropout to 0 by default. Should `s` be 0.1 or 0? Could you clarify?
`mid_dim` is 64 for ViT and `dim // 12` for the Swin Transformer. `dropout` is 0 and `s` is 0.1.
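Condensed into a sketch (the dict names are illustrative and the Swin-T stage dims are an assumed example):

```python
# Adapter hyperparameters from the answer above.
vit_adapter_cfg = dict(mid_dim=64, dropout=0.0, s=0.1)            # plain ViT
swin_adapter_cfgs = [dict(mid_dim=dim // 12, dropout=0.0, s=0.1)  # per Swin stage
                     for dim in (96, 192, 384, 768)]              # mid_dim -> 8/16/32/64
```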
Hi, where should I set `bottleneck = dim // 12`? Thanks in advance!