Ze Liu

Hi, you can just delete the "attn_mask" entries in the pre-trained checkpoint. We have also provided a function for this purpose: https://github.com/microsoft/Swin-Transformer/blob/5d2aede42b4b12cb0e7a2448b58820aeda604426/utils.py#L46
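A minimal sketch of the manual route (the file names and the `"model"` key are assumptions about the checkpoint layout; adapt them to your own file):

```python
import torch

# Load the pre-trained checkpoint on CPU; Swin checkpoints typically nest the
# weights under a "model" key, but fall back to the raw dict if they don't.
checkpoint = torch.load("swin_pretrained.pth", map_location="cpu")
state_dict = checkpoint["model"] if "model" in checkpoint else checkpoint

# The "attn_mask" buffers are tied to the pre-training input size; drop them
# and let the model rebuild its own masks at the new resolution.
for key in [k for k in state_dict if "attn_mask" in k]:
    del state_dict[key]

torch.save(checkpoint, "swin_pretrained_no_attn_mask.pth")
```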

Hi @FreddieRao, thanks for pointing this out. You can fix it by using `SequentialSampler`: https://github.com/microsoft/Swin-Transformer/blob/b05e6214a37d33846903585c9e83b694ef411587/data/build.py#L56-L61
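A minimal sketch of that change (`dataset_val` and the loader settings are placeholders, not the repo's exact code):

```python
from torch.utils.data import DataLoader, SequentialSampler

# Evaluate every validation sample exactly once and in order, instead of
# using a sampler that partitions the set across processes.
sampler_val = SequentialSampler(dataset_val)
data_loader_val = DataLoader(
    dataset_val,
    sampler=sampler_val,
    batch_size=128,
    num_workers=4,
    pin_memory=True,
    drop_last=False,  # keep the last partial batch so nothing is skipped
)
```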

Instructions and configs for fine-tuning on higher resolution can be found here: https://github.com/microsoft/Swin-Transformer/blob/main/get_started.md#fine-tuning-on-higher-resolution
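The linked doc has the full recipe; the core step when the window size grows with resolution is resizing the relative position bias tables. A rough illustration of that step (my own sketch, not the repo's exact code):

```python
import torch
import torch.nn.functional as F

def resize_rel_pos_bias(table: torch.Tensor, new_window: int) -> torch.Tensor:
    """Bicubically resize a (L, num_heads) relative position bias table,
    where L = (2 * window - 1) ** 2, to match a new window size."""
    L, num_heads = table.shape
    old_side = int(L ** 0.5)   # 2 * old_window - 1
    new_side = 2 * new_window - 1
    # Reshape to a 2-D grid per head, interpolate, and flatten back.
    t = table.transpose(0, 1).reshape(1, num_heads, old_side, old_side)
    t = F.interpolate(t, size=(new_side, new_side),
                      mode="bicubic", align_corners=False)
    return t.reshape(num_heads, new_side * new_side).transpose(0, 1)
```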

Hi @Dongshengjiang and @tb2-sy , you can find instructions and configs here: https://github.com/microsoft/Swin-Transformer/blob/main/get_started.md#fine-tuning-on-higher-resolution

Hi @zbwxp, the training configs are https://github.com/microsoft/Swin-Transformer/blob/main/configs/swin_base_patch4_window7_224_22k.yaml and https://github.com/microsoft/Swin-Transformer/blob/main/configs/swin_large_patch4_window7_224_22k.yaml.

Hi @tiancity-NJU, you can find instructions and configs here: https://github.com/microsoft/Swin-Transformer/blob/main/get_started.md#fine-tuning-from-a-imagenet-22k21k-pre-trained-model

Hi @tb2-sy, it seems that `len(data_loader_train)` is 0. First, check that the training data path is correct. If you just want to evaluate, you can hack it...
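A quick, self-contained way to check the path (the path below is a placeholder; the layout is the standard ImageNet folder structure the loader expects):

```python
import os

data_path = "/path/to/imagenet"   # hypothetical; replace with your data path
train_dir = os.path.join(data_path, "train")

# An empty data_loader_train usually means this directory is missing or empty:
# the loader expects <data-path>/train/<class-folder>/<image files>.
if not os.path.isdir(train_dir):
    print(f"missing training directory: {train_dir}")
else:
    classes = [d for d in os.listdir(train_dir)
               if os.path.isdir(os.path.join(train_dir, d))]
    print(f"found {len(classes)} class folders under {train_dir}")
```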

Hi @cuge1995, we first compute the L2-norm of the activation map and then use the Open3D library (https://github.com/intel-isl/Open3D) for visualization; the color map we use is jet.
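A self-contained sketch of that pipeline (the `.npy` file names and shapes are assumptions: `xyz` is the (N, 3) point coordinates, `feats` the (N, C) per-point activations):

```python
import numpy as np
import open3d as o3d
import matplotlib.pyplot as plt

xyz = np.load("points.npy")          # (N, 3) point coordinates
feats = np.load("activations.npy")   # (N, C) per-point activations

# Per-point L2 norm of the activation map, normalized to [0, 1].
norms = np.linalg.norm(feats, axis=1)
norms = (norms - norms.min()) / (norms.max() - norms.min() + 1e-8)

# Map the scalars to RGB with the jet colormap, then render the colored cloud.
colors = plt.get_cmap("jet")(norms)[:, :3]

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(xyz)
pcd.colors = o3d.utility.Vector3dVector(colors)
o3d.visualization.draw_geometries([pcd])
```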

Hi @TXH-mercury, thanks for your interest in our work. The performance drop may be due to differences in the PyTorch reproduction details, such as some hyper-parameters and the masks in downsampling...

Hi all, we have released the PyTorch models of S3DIS. Please let me know if you have any other questions.