Maple
It's a real user study.
ViTMatte's high memory requirement mainly comes from the attention mechanism in the ViT backbone. From my perspective, I may try [memory efficient attention](https://github.com/facebookresearch/xformers#key-features) or [flash attention](https://github.com/Dao-AILab/flash-attention) to replace the...
Images in fg_dir and bg_dir are both RGB images. As for demos, you can refer to [here](https://paperswithcode.com/dataset/composition-1k).
@luoww1992 @yuqilol **A1:** No, in our setting there is no requirement for the foreground and background to be associated. **A2:** I can't share the demos, because the dataset is semi-public...
Sorry, it's not in our plans. Thanks for your interest.
Hi, Matte Anything is training-free. Its foundation matting model is ViTMatte. You can check it [here](https://github.com/hustvl/ViTMatte).
Awesome work! We will give it a try!
@seawater668 Hi, it seems like a d2 version problem. Did you build d2 from source as described in [its docs](https://detectron2.readthedocs.io/en/latest/tutorials/install.html#build-detectron2-from-source)? You may also refer to this [issue](https://github.com/hustvl/Matte-Anything/issues/4).
@albirrkarim Sorry for the late reply. Since we are still trying to improve the quantitative results of Matte-Anything, we will not release our masks and pseudo-trimaps right away. But we will...
Great suggestion! We might give it a try in the future. 😃