Han Hu
You could probably plot the attention maps within the neighboring window for each query point, e.g. as in the sketch below.
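A minimal sketch of one way to do this, assuming the module layout of this repo's classification model (`model.layers[i].blocks[j].attn.softmax`); the attribute paths and the `plot_window_attention` helper are assumptions to adapt to your own setup, not code from the repo.

```python
import torch
import matplotlib.pyplot as plt

def plot_window_attention(model, image, layer=0, block=0, window=0, head=0,
                          query=0, window_size=7):
    """Capture window attention via a forward hook and plot one query's map."""
    captured = {}

    def hook(module, inputs, output):
        # output: (num_windows * B, num_heads, N, N), with N = window_size ** 2
        captured['attn'] = output.detach()

    attn_softmax = model.layers[layer].blocks[block].attn.softmax
    handle = attn_softmax.register_forward_hook(hook)
    with torch.no_grad():
        model(image)
    handle.remove()

    # Attention of one query token over all tokens in its local window
    attn = captured['attn'][window, head, query]           # (N,)
    attn_map = attn.reshape(window_size, window_size)
    plt.imshow(attn_map.cpu(), cmap='viridis')
    plt.title(f'layer {layer} block {block} window {window} head {head} query {query}')
    plt.colorbar()
    plt.show()
```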
> Thanks for your great repo. I would like to ask when the Swin-V2 pre-trained models will be released?

The Swin-V2 models have been released.
> The Swin Transformer can't deal with segmentation tasks! The output has only one color. Can you help me resolve this issue?

Please try this codebase: https://github.com/SwinTransformer/Swin-Transformer-Semantic-Segmentation
> I only used the Swin Transformer model architecture, with a window size of 4 and an input size of 32; nothing else was changed and no data augmentation was used. ResNet can reach 80% accuracy, but Swin Transformer only reaches 52%. What could cause such a result?

You may need to first redesign the full architecture for the small input size, and also change many other hyper-parameters accordingly; see the sketch below.
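A rough sketch of what "redesigning the full architecture" for 32x32 inputs could look like, assuming the `SwinTransformer` class in `models/swin_transformer.py`; the specific values (patch_size=2, three stages, embed_dim=48) are illustrative assumptions, not a recommended recipe.

```python
from models.swin_transformer import SwinTransformer  # import path in this repo (assumption)

model = SwinTransformer(
    img_size=32,
    patch_size=2,          # 32/2 = 16x16 patches instead of 224/4 = 56x56
    in_chans=3,
    num_classes=10,        # e.g. CIFAR-10
    embed_dim=48,
    depths=[2, 2, 6],      # fewer stages: resolutions 16x16 -> 8x8 -> 4x4
    num_heads=[3, 6, 12],
    window_size=4,         # divides every stage resolution (16, 8, 4)
    drop_path_rate=0.1,
)
```

Even with a suitable architecture, the training recipe (augmentation, drop path, learning-rate schedule) usually needs to be re-tuned for small inputs and small datasets.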
> The Swin-MLP code is right for Swin-Mixer.
We use bi-cubic interpolation on the relative_position_bias_table to deal with larger window sizes; related code will be provided soon (a rough sketch is given below).
Swin V2 has a better approach to dealing with different window sizes.
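A rough sketch of the bi-cubic resizing mentioned above, assuming the table layout used in this repo, `((2*Wh-1)*(2*Ww-1), num_heads)`; the function name and square-window assumption are mine, not the repo's loading code.

```python
import torch
import torch.nn.functional as F

def resize_rel_pos_bias_table(table: torch.Tensor, old_window: int, new_window: int) -> torch.Tensor:
    """Resize ((2*old_window-1)**2, nH) -> ((2*new_window-1)**2, nH) with bicubic interpolation."""
    old_size = 2 * old_window - 1
    new_size = 2 * new_window - 1
    num_heads = table.shape[1]
    # (L, nH) -> (1, nH, old_size, old_size) so F.interpolate can treat it as an image
    table_2d = table.permute(1, 0).reshape(1, num_heads, old_size, old_size)
    table_2d = F.interpolate(table_2d, size=(new_size, new_size),
                             mode='bicubic', align_corners=False)
    # back to ((2*new_window-1)**2, nH)
    return table_2d.reshape(num_heads, new_size * new_size).permute(1, 0)

# Example: adapt a window-7 table to window 12
old = torch.randn((2 * 7 - 1) ** 2, 4)
new = resize_rel_pos_bias_table(old, old_window=7, new_window=12)
print(new.shape)  # torch.Size([529, 4])
```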
[help] How to export a Swin model to ONNX? Problem: Node (Concat_246) Op (Concat) [ShapeInferenceError]
Thanks @xizi
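A minimal export sketch, not the exact fix from this thread: shape-inference errors on Concat nodes often come from shapes that only become dynamic during tracing, so the usual advice is a fixed-size dummy input and a recent opset. The constructor defaults and opset choice below are assumptions.

```python
import torch
from models.swin_transformer import SwinTransformer  # import path in this repo (assumption)

# Build a Swin-T-like model with default arguments; load your own weights in practice.
model = SwinTransformer(img_size=224, window_size=7)
model.eval()

dummy = torch.randn(1, 3, 224, 224)  # fixed input size so reshapes trace to static shapes
torch.onnx.export(
    model,
    dummy,
    "swin.onnx",
    opset_version=12,            # a recent opset; older ones lack some required ops
    input_names=["input"],
    output_names=["logits"],
    do_constant_folding=True,
)
```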
> Dear authors,
>
> Sorry for repeating [this issue](https://github.com/microsoft/Swin-Transformer/issues/55). It will be great if you could release an ImageNet-22K pretrained Swin-T model (if possible). I believe this will greatly...
> I train the `swin_tiny_patch4_window7_224` with one million classes and 100 million images with softmax loss and adamw, the batch size is 600 and train for 400,000 iterations but the...