MaskCLIP
Official PyTorch implementation of "Extract Free Dense Labels from CLIP" (ECCV 22 Oral)
Dear author, thanks for your great work. When testing MaskCLIP+ with the RN50 CLIP backbone, the inference speed is very slow; could you check that? This is the command: `./tools/dist_test.sh...`
Dear authors, thank you for your nice work, and congratulations on MaskCLIP being accepted to ECCV as an oral paper! I have tried running your code on a single image...
Thanks for the wonderful paper and repo. I was able to reproduce MaskCLIP and MaskCLIP+ with ViT-B/16 + R101 on the Pascal Context dataset. The resulting mAP is 25.45 and 29.48...
Hello, thanks for your nice code and nice paper! One question: looking through the code, I can't find where the pre-trained weights are loaded into...
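On the weight-loading question: in mmsegmentation-based codebases, the pretrained checkpoint is usually referenced from the config rather than loaded explicitly in the model code. A hypothetical config fragment illustrating the pattern (the backbone type and checkpoint path below are placeholders, not the repo's actual values):

```python
# Hypothetical mmsegmentation-style config fragment. The checkpoint path
# and backbone type are assumptions for illustration only.
model = dict(
    backbone=dict(
        type='VisionTransformer',
        pretrained='pretrain/ViT16_clip_backbone.pth',
    ),
)
```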
Hi, I'm super interested in this paper. We are currently retraining MaskCLIP+ and the results do not seem as good as in the paper. May I ask when will...
Why does MaskCLIP's 1x1 convolution need no training? Also, why can't I find the 1x1 convolution in the code?
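For context, the 1x1 convolution needs no training because its weights can be transplanted directly from CLIP's pretrained projection layers. A minimal sketch of that transplant, using hypothetical dimensions and a plain `nn.Linear` as a stand-in for CLIP's projection:

```python
import torch
import torch.nn as nn

# Hypothetical dimensions; this nn.Linear plays the role of CLIP's
# pretrained value/text projection.
c_in, c_out = 768, 512
pretrained_proj = nn.Linear(c_in, c_out)

# Build a 1x1 conv and copy the pretrained weights into it: a Linear
# weight of shape [c_out, c_in] reshapes to a Conv2d weight of shape
# [c_out, c_in, 1, 1], so no new parameters need to be learned.
conv1x1 = nn.Conv2d(c_in, c_out, kernel_size=1)
with torch.no_grad():
    conv1x1.weight.copy_(pretrained_proj.weight.view(c_out, c_in, 1, 1))
    conv1x1.bias.copy_(pretrained_proj.bias)

# Freeze it: the layer is a pretrained projection, not a trainable head.
for p in conv1x1.parameters():
    p.requires_grad = False
```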
When I run the command `python tools/maskclip_utils/convert_clip_weights.py --model ViT16 --backbone`, it fails with an error. Could anybody help solve it? Thanks a lot!
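For anyone debugging the conversion step, here is a minimal sketch of what extracting CLIP's visual weights conceptually looks like, assuming the official openai/CLIP package is installed; the repo's actual convert_clip_weights.py may differ in flags and key naming:

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

# Load the pretrained CLIP model (non-JIT) on CPU.
model, _ = clip.load("ViT-B/16", device="cpu", jit=False)

# Keep only the visual backbone's weights for the segmentation model.
visual_state = model.visual.state_dict()
torch.save(visual_state, "ViT16_clip_visual.pth")  # hypothetical filename
```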
Hi, and thanks for your work on MaskCLIP. I just read the paper and tried to follow it using your provided codebase. In the paper you mention that you alter...
What's the difference between a 1x1 convolution and a linear layer? Why should we do the replacement?
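On the 1x1-vs-linear question: a 1x1 convolution computes exactly the same map as a linear layer applied independently at every spatial position; the replacement just lets the projection run on a 2D feature map instead of a token sequence. A small self-contained check (all dimensions are hypothetical):

```python
import torch
import torch.nn as nn

c_in, c_out = 768, 512
linear = nn.Linear(c_in, c_out)

# Reuse the linear layer's weights in a 1x1 conv.
conv1x1 = nn.Conv2d(c_in, c_out, kernel_size=1)
with torch.no_grad():
    conv1x1.weight.copy_(linear.weight.view(c_out, c_in, 1, 1))
    conv1x1.bias.copy_(linear.bias)

x = torch.randn(2, c_in, 14, 14)                 # a 2D feature map
# Linear path: treat each spatial position as a token.
y_lin = linear(x.flatten(2).transpose(1, 2))     # [2, 196, c_out]
# Conv path: apply the 1x1 conv directly on the map.
y_conv = conv1x1(x).flatten(2).transpose(1, 2)   # [2, 196, c_out]
print(torch.allclose(y_lin, y_conv, atol=1e-5))  # True
```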
Do you have any plans to share the pre-trained weights? Thanks.