EVA
EVA Series: Visual Representation Fantasies from BAAI
Hi, I found that EVA-02 adopts both a learnable positional embedding and rotary embedding at the same time. Why did you choose this design? Thanks.
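For readers wondering how the two kinds of embedding can coexist in one model: below is a minimal NumPy sketch (function and variable names are hypothetical, not EVA's actual code) in which a learnable absolute positional embedding is added to the patch tokens once at the input, while rotary embedding rotates channel pairs by position-dependent angles, as would be applied to queries/keys inside attention:

```python
import numpy as np

def rotary_embed(x, base=10000.0):
    # x: (seq_len, dim), dim even. Rotate each (x1_i, x2_i) channel pair
    # by a position-dependent angle -- the RoPE formulation.
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)        # (half,)
    angles = np.outer(np.arange(seq_len), freqs)     # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

rng = np.random.default_rng(0)
tokens = rng.standard_normal((8, 16))    # 8 patch tokens, dim 16
pos_embed = np.zeros((8, 16))            # learnable parameter in a real model
h = tokens + pos_embed                   # absolute embedding added once at input
q = rotary_embed(h)                      # rotary applied per attention layer
```

Because each channel pair is rotated (an orthogonal map), rotary embedding preserves per-token norms while still injecting relative-position information; the additive learnable embedding carries absolute position. The two are therefore not redundant in an obvious way, which is presumably what the question is probing.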
Hello! Here, `import accimage` and `import display` cannot be resolved. How can I solve this problem? I hope to hear from you. Thanks. `# TODO: specify the return type` `def accimage_loader(path: str) ->...`
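Unresolved imports like these usually mean the packages are simply not installed in the active environment (`accimage` in particular is not a standard dependency and often has to be installed separately). A sketch of what the loader could look like with the `TODO` return annotation filled in, modeled on the torchvision-style `accimage_loader` with a PIL fallback (an illustration, not the repository's exact code):

```python
from typing import Any

def accimage_loader(path: str) -> Any:
    """Load an image with accimage, falling back to PIL on decode failure.

    Returns either an accimage.Image or a PIL.Image, hence the Any
    annotation (modeled on torchvision.datasets.folder.accimage_loader).
    """
    import accimage  # resolves only if the accimage package is installed
    try:
        return accimage.Image(path)
    except OSError:
        # accimage cannot decode this file; fall back to PIL
        from PIL import Image
        with open(path, "rb") as f:
            return Image.open(f).convert("RGB")
```

If the editor still flags the import after installation, the language server is likely pointed at a different interpreter than the one the package was installed into.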
Sorry to bother you. I ran into an error during fine-tuning on EVA-01: RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates...
(eva) root@nexus-nyz:~/EVA/EVA-01/eva# bash eva.sh /root/miniconda3/envs/eva/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated and will be removed in future. Use torchrun. Note that --use_env is set by default in torchrun. If your...
Thanks for the impressive work. By the way, where can I find the script to test zero-shot retrieval performance?
Hello! What is the difference between EVA-01 and EVA-02? Best wishes, Egor
Hi, thanks for the great EVA-CLIP. I'm loading and training the EVAVisionTransformer in EVA02-CLIP-L-14 and found that setting patch_dropout to 0.5 raises RuntimeError: The size of tensor a (128) must match...
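A plausible reading of the error (not a confirmed diagnosis): patch dropout shortens the token sequence, so any tensor sized for the full sequence, such as a precomputed positional or rotary table, no longer matches; a length of 128 is exactly half of a 256-patch sequence at keep probability 0.5. A minimal sketch of the mechanism, with hypothetical names:

```python
import numpy as np

def patch_dropout(tokens, keep_prob, rng):
    """Randomly keep a keep_prob fraction of patch tokens during training.

    A sketch of patch-dropout-style subsampling, not EVA's exact module;
    the CLS token would be excluded in a real implementation.
    """
    n = tokens.shape[0]
    keep = max(1, int(n * keep_prob))
    idx = np.sort(rng.permutation(n)[:keep])  # kept indices, in order
    return tokens[idx], idx

rng = np.random.default_rng(0)
patches = rng.standard_normal((256, 64))      # 256 patches, dim 64
kept, idx = patch_dropout(patches, 0.5, rng)  # sequence length 256 -> 128
```

If that is the cause, any per-position buffer downstream would need to be gathered with the same kept indices (or rebuilt for the shortened length) before being added to the tokens.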
Hello, thanks for your work, but when I run this work with DeepSpeed, the GPU memory usage does not decrease. Thank you very much if you can tell me why.
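One common reason, offered as a guess rather than a diagnosis of this repository: DeepSpeed only reduces GPU memory when ZeRO optimization is actually enabled in the config; running the engine at stage 0 wraps the model without partitioning anything. A hypothetical `ds_config.json` fragment enabling stage-2 ZeRO with optimizer-state offload:

```json
{
  "train_micro_batch_size_per_gpu": 8,
  "fp16": { "enabled": true },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": { "device": "cpu" }
  }
}
```

Stage 2 partitions optimizer states and gradients across ranks; stage 3 additionally partitions parameters. The batch size and offload settings above are placeholders, not values taken from the EVA scripts.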
Every time I try to run `pip install -r requirements.txt`, I get the error `AssertionError: Unable to pre-compile sparse_attn`, even when I download and install the Triton compiled for Windows...
I find that the EVA-CLIP model has an extra inner_attn_ln layer compared to the original pretrained model. [EVA-CLIP](https://github.com/baaivision/EVA/blob/master/EVA-CLIP/rei/eva_clip/eva_vit_model.py#L164) [EVA](https://github.com/baaivision/EVA/blob/master/EVA-02/asuka/modeling_finetune.py#L188) Is there any reason to use another layer norm in the CLIP model?
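For context on what the question is pointing at: an "inner" attention LayerNorm of this kind normalizes the concatenated head outputs before the output projection, a sub-LayerNorm placement used in some large-scale models for training stability; whether that is the authors' motivation here is for them to confirm. A NumPy sketch of the placement (names hypothetical, affine parameters omitted):

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize over the channel dimension (no learned scale/shift here).
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def attention_out(heads_out, w_proj, use_inner_attn_ln=True):
    """Optionally LayerNorm the concatenated head outputs *before*
    the output projection -- the extra inner_attn_ln placement.
    A sketch, not the repository's exact code.
    """
    if use_inner_attn_ln:
        heads_out = layer_norm(heads_out)
    return heads_out @ w_proj

rng = np.random.default_rng(1)
h = rng.standard_normal((4, 32))   # 4 tokens, concatenated head dim 32
w = rng.standard_normal((32, 32))  # output projection weight
y = attention_out(h, w)
```

Without the flag, the function reduces to the plain `heads_out @ w_proj` of the EVA-02 fine-tuning model, so the two linked implementations differ only by this normalization step.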