ViTAE-Transformer
The official repo for [NeurIPS'21] "ViTAE: Vision Transformer Advanced by Exploring Intrinsic Inductive Bias" and [IJCV'22] "ViTAEv2: Vision Transformer Advanced by Exploring Inductive Bias for Image...
How do I configure ViTAE-B?
Thank you for the release of ViTDet. I'm trying to train ViTAE-Small on COCO along with https://github.com/ViTAE-Transformer/ViTDet. I downloaded the pretrained model "ViTAE-S.pth.tar" from https://github.com/ViTAE-Transformer/ViTAE-Transformer/tree/main/Image-Classification. And, I tried to...
Great work! In Fig. 5, the maximum average distance is similar to that in the original ViT, but there is a window partition in Stage3, that is, the corresponding original...
Hi, I find your research interesting and am grateful to the authors for providing the code. I saw the following visualization results in the...
When I use mmcv 1.4.1, I get the error: AssertionError: MMCV==1.4.1 is used but incompatible. Please install mmcv>=2.0.0rc4. When I use mmcv 2.0, I get the error: ImportError: cannot import name 'revert_sync_batchnorm' from 'mmcv.cnn.utils' (/home/tian/miniconda3/envs/mmseg/lib/python3.9/site-packages/mmcv/cnn/utils/__init__.py). I hope you can update to OpenMMLab...
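The two errors above pull in opposite directions: one code path asserts mmcv>=2.0.0rc4 while another imports a symbol removed in mmcv 2.x. A common workaround is to pin mmcv inside the window the codebase actually supports before launching training. The sketch below is a hypothetical helper, not part of this repo; the 1.3.0–1.4.0 window is an assumption for illustration and should be replaced by the range stated in the repo's requirements.

```python
from importlib.metadata import version, PackageNotFoundError


def parse_version(v: str) -> tuple:
    # Convert a version string like "1.3.17" into (1, 3, 17) for comparison.
    # Pre-release suffixes such as "rc4" are reduced to their digits for simplicity.
    parts = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)


def mmcv_in_range(installed: str, low: str = "1.3.0", high: str = "1.4.0") -> bool:
    # True if low <= installed < high (an ASSUMED compatible window, not confirmed).
    return parse_version(low) <= parse_version(installed) < parse_version(high)


if __name__ == "__main__":
    # Fail fast before training if the installed mmcv is outside the window.
    try:
        installed = version("mmcv")
    except PackageNotFoundError:
        installed = None
    if installed and not mmcv_in_range(installed):
        raise SystemExit(f"mmcv {installed} may be incompatible with this codebase")
```

With a pinned install (e.g. `pip install "mmcv>=1.3.0,<1.4.0"`, range assumed as above), the check passes silently; both 1.4.1 and 2.0.0rc4 from the error messages would be rejected.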
When will the weights of ViTAE-Large and ViTAE-Huge be released?
Welcome to update to OpenMMLab 2.0. I am Vansin, the technical operator of OpenMMLab. In September of last year, we announced the release of OpenMMLab 2.0 at the World Artificial...
I am trying to use the pretrained model ViTAEv2-B with the command: `python -m torch.distributed.launch ./main.py Database/BIRDS/ --model ViTAEv2_B --pretrained --initial-checkpoint checkpoint/ViTAEv2-B.pth.tar -b 32 --lr 5e-4 --weight-decay .065 --img-size 224...
I have been studying ViTAE recently and have not found a 13M model. I am currently using this model to train downstream tasks. Would it be convenient for me to...