Zhe Chen
Thanks for your feedback. I think you can try fp16 inference.
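For reference, here is a minimal sketch of fp16 inference with PyTorch's autocast; the tiny conv model and input shape are only placeholders, not our actual config:

```python
# A minimal sketch of fp16 inference with torch.cuda.amp.autocast.
# The toy model below is just a stand-in for your actual segmentor.
import torch
import torch.nn as nn

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 150, 1)
).to(device).eval()
x = torch.randn(1, 3, 512, 512, device=device)

with torch.no_grad(), torch.cuda.amp.autocast(enabled=(device == 'cuda')):
    out = model(x)  # forward pass runs in half precision on GPU, roughly halving activation memory
print(out.dtype, out.shape)
```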
Hi, you can check nvidia-smi to double-check. If it is consistent with, or close to, the memory cost shown in the log, then it is normal.
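If you want a second reference point besides nvidia-smi, something like the sketch below prints the numbers tracked by PyTorch's allocator (nvidia-smi is usually a bit higher because it also counts the CUDA context and cached blocks):

```python
# Hedged sketch: cross-check the memory figure in the log against PyTorch's allocator stats.
import torch

if torch.cuda.is_available():
    allocated = torch.cuda.max_memory_allocated() / 1024 ** 2  # peak memory of live tensors (MB)
    reserved = torch.cuda.max_memory_reserved() / 1024 ** 2    # peak memory held by the caching allocator (MB)
    print(f'max allocated: {allocated:.0f} MB, max reserved: {reserved:.0f} MB')
```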
Apologies for the delay. I haven't tested BEiT_Adapter_base on ADE20K before, but I've just set up the experiment. I'll inform you as soon as the results are available. Thank you...
Here is my result for BEiT-Adapter-Base + Mask2Former. Note that the Mask2Former dimension used in this experiment is 256. Could you provide your BEiT_mask2former_base config for further alignment?...
Thanks for your message. I'm sorry, but due to my current busy schedule I don't have immediate plans to upgrade to MMCV v2. It seems like it might not be...
Hi, thank you for your feedback. However, your PR is quite large: it changes 2,015 files and uploads some images (it seems to be some images...
Hi, for detection, window attention has almost no effect on performance (the difference is only about 0.x points), and the detection resolution is quite high (800x1333 and above), which makes global attention hard to run. Segmentation is affected a bit more by window attention, and its resolution is lower than detection (e.g., 512x512, at most 896x896), so the resource cost is still acceptable; that's why we used global attention there.
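If it helps, below is a rough sketch of how this choice typically shows up in the backbone config; the exact field names (window_attn, window_size) are assumptions on my part, so please check the configs in the repo:

```python
# Hedged sketch of toggling windowed vs. global attention in a ViT-Adapter-style backbone config.
num_layers = 12

# Detection (high input resolution, e.g. 800x1333): windowed attention keeps memory manageable.
det_backbone = dict(
    type='ViTAdapter',
    window_attn=[True] * num_layers,
    window_size=[14] * num_layers,
)

# Segmentation (lower resolution, e.g. 512x512 up to 896x896): global attention is affordable.
seg_backbone = dict(
    type='ViTAdapter',
    window_attn=[False] * num_layers,
    window_size=[None] * num_layers,
)
```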
I'm really sorry for the late reply. This is likely because the directory the code is running from has not been added to PYTHONPATH. You can try the...
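If editing PYTHONPATH in the shell is inconvenient, a quick workaround is to add the repo root to sys.path at the top of the script; the path below is just a placeholder:

```python
# Minimal sketch: make the repository root importable before local modules are loaded.
# Equivalent to `export PYTHONPATH=$PYTHONPATH:/path/to/repo` in the shell.
import os
import sys

repo_root = os.path.dirname(os.path.abspath(__file__))  # adjust if your script lives in a subfolder
if repo_root not in sys.path:
    sys.path.insert(0, repo_root)

# after this, importing the local package that previously failed should resolve
```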
Thank you very much for your interest and patience. I've been quite busy recently, so I may only be able to release this code after the CVPR deadline. I hope you can understand...
Hi, I just uploaded get_flops.py for calculating GFLOPs; you can try it.
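If you prefer calling the counter directly, the rough sketch below uses MMCV's get_model_complexity_info, which scripts like get_flops.py are typically built on; the config path and input shape are placeholders, and mmcv 1.x / mmseg 0.x APIs are assumed:

```python
# Hedged sketch of counting GFLOPs with mmcv's complexity tool.
import mmcv
from mmcv.cnn import get_model_complexity_info
from mmseg.models import build_segmentor

cfg = mmcv.Config.fromfile('path/to/your_config.py')            # placeholder config path
model = build_segmentor(cfg.model, test_cfg=cfg.get('test_cfg'))
if hasattr(model, 'forward_dummy'):
    model.forward = model.forward_dummy                          # plain forward without data dicts
model = model.cuda().eval()

flops, params = get_model_complexity_info(model, (3, 512, 512))  # returns human-readable strings
print(f'FLOPs: {flops}, Params: {params}')
```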