Deep-Image-Matting
How to get a trimap when there is no GT (question about the paper)
This is really a question about the paper, so it may not be appropriate here, but I can't get in contact with the paper's authors, so maybe some useful discussion can happen here. In the paper, three datasets are used: alphamatting.com, Composition-1k, and a real-image set. The first two have ground-truth alpha mattes, so trimaps can be generated by dilation. However, the third one, i.e. the real-image dataset, which according to the paper is "pulled from the internet as well as images provided by the ICCV 2013 tutorial on image matting", has no GT. So where does the trimap come from at inference time?
I'm wondering the same. Any help with this?
@Sh0lim I generate a rough GT mask using a semantic segmentation network like Mask R-CNN or MobileNet.
@ypflll Thank you. What do you think, could saliency detection be another approach? That way we wouldn't need to train a network for specific classes; with saliency it could generalize to any object. Am I right? For example, using this: https://sites.google.com/site/ligb86/mdfsaliency/
Yes. A trimap can be generated from a rough mask, so I think saliency detection can be used here.
Good. Thx
Hi, I want to know: do you generate a rough mask that only has foreground and background? How do you deal with the third class, the "unknown" area? And if you don't have an "unknown" class, does your deep image matting model only handle foreground and background? Does the model still work well in that situation? Thanks!
I still don't understand how to get a trimap. Which net do you use to get a mask? Please let me know.
I use Mask R-CNN or DeepLabv3+.