pytorch_semantic_human_matting
How big is your dataset?
Hi,
Your demo results are great, and I'd like to reproduce them by building a new dataset.
So I'd like to know how many images (or how many high-quality alpha mattes) you used to train your model.
From what I know, the DIM dataset has 202 foreground humans, the SHM dataset has 34311, and the dataset in the paper "A Late Fusion CNN for Digital Matting" has 228.
Also, I've found that the T-Net is pretty hard to train. So my other question is: are the GT trimaps you use annotated manually, or generated by dilation from the alpha mattes as in those papers?
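To be clear, by "dilated" I mean the usual erosion/dilation trick for carving an unknown band out of the ground-truth alpha. A minimal sketch of what I have in mind (assuming the alpha is a float array in [0, 1]; the kernel size is just an example, not a value from your repo):

```python
import cv2
import numpy as np

def alpha_to_trimap(alpha, kernel_size=10):
    """Generate a trimap from a GT alpha matte via erosion/dilation.

    kernel_size controls the width of the unknown band (illustrative only).
    Returns a uint8 trimap with background=0, unknown=128, foreground=255.
    """
    fg = (alpha > 0.95).astype(np.uint8)       # confident foreground
    not_bg = (alpha > 0.05).astype(np.uint8)   # anything not pure background
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    fg = cv2.erode(fg, kernel, iterations=1)           # shrink sure foreground
    not_bg = cv2.dilate(not_bg, kernel, iterations=1)  # grow the mixed region
    trimap = np.full(alpha.shape, 128, dtype=np.uint8)
    trimap[not_bg == 0] = 0    # sure background
    trimap[fg == 1] = 255      # sure foreground
    return trimap
```

Is this roughly what you did, or did you hand-annotate the trimaps?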
Thanks for the great work!