pytorch_semantic_human_matting

This is an unofficial implementation of the paper "Semantic Human Matting".

12 pytorch_semantic_human_matting issues

Where can I download the pre-trained models? Or could someone kindly share theirs? Thank you very much!

Hi, thanks for your great work. I have a question: when I'm training m_net, why do I get an output like this? ![111](https://user-images.githubusercontent.com/30466896/67266833-b270c800-f4e3-11e9-88a3-a6a4f23e3d4a.png) Here are my alpha label and input trimap....

Hello, thank you very much for SHM. When training the T-net, the intermediate outputs look quite good (at [32 / 300], loss: 0.05447), but the results at test time are very poor. Intermediate trimap during training: ![trimap_0_111](https://user-images.githubusercontent.com/24928749/61785678-c7837f80-ae3e-11e9-94c3-e4d4ddd7b773.png) Test result: ![20190724180817](https://user-images.githubusercontent.com/24928749/61785743-e550e480-ae3e-11e9-8246-3d0703d7e55d.png) I barely changed the source code, only modified dataset.py. Do you know what the reason might be? Thanks.

@tsing90 I have finished training both the T-net and the M-net, but I ran into a problem when testing m_net with test.py. Running `python test.py --train_phase=pre_train_m_net` reports the following error: use GPU Loading model from ./ckpt/pre_train_m_net/model/model_obj.pth... torch.Size([1, 3, 635, 408]) torch.Size([1, 3, 640, 408]) Traceback (most recent call last): File "test.py", line 159, in main(args) File...
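The two tensor sizes printed just before the traceback differ in height (635 vs. 640), which suggests the image and trimap tensors end up with different spatial shapes before they are combined for M-net. Below is a minimal sketch of aligning them before concatenation; the cause is an assumption based on the printed shapes, and the tensor names are stand-ins rather than the repository's actual code.

```python
import torch
import torch.nn.functional as F

# Stand-ins for the tensors printed in the issue; the real ones come from test.py.
tensor_img = torch.rand(1, 3, 635, 408)
tensor_tri = torch.rand(1, 3, 640, 408)

# Resize the trimap to the image's spatial size before concatenating,
# so M-net receives a single (1, 6, 635, 408) input.
tensor_tri = F.interpolate(tensor_tri, size=tensor_img.shape[2:], mode='nearest')
m_net_input = torch.cat((tensor_img, tensor_tri), dim=1)
```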

When testing m_net in test.py, the code fails to run, for the following reasons: 1. `trimap = np.eye(3)[trimap.reshape(-1)].reshape(list(trimap.shape) + [3])` is presumably meant to turn the single-channel trimap into three channels, but `np.eye(3)[trimap.reshape(-1)]` goes out of bounds: the indices exceed the 3x3 identity matrix. 2. In `frame_seg = seg_process(args, (tensor_img, tensor_tri), net, trimap=trimap_src)`, the last argument should be removed, since seg_process does not take that parameter.
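A minimal sketch of a one-hot conversion that avoids the out-of-bounds indexing, assuming the single-channel trimap stores the usual 0 / 128 / 255 values (background / unknown / foreground); indexing `np.eye(3)` with those raw values is what overruns the 3x3 identity matrix.

```python
import numpy as np

def trimap_to_3ch(trimap):
    # Map raw pixel values to class indices 0 / 1 / 2 (values assumed, not from the repo).
    index = np.zeros(trimap.shape, dtype=np.int64)
    index[trimap == 128] = 1
    index[trimap == 255] = 2
    # Each pixel now selects a valid row of the 3x3 identity, giving an (H, W, 3) one-hot map.
    return np.eye(3, dtype=np.float32)[index]
```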

Hello, I'd like to ask about the relationship between the two files ['DIM.txt','SHM.txt']. You explain it to some extent in the docs, but I am still a bit confused. My understanding is that the data in these two files have no particular relationship: both list PNG foreground images, and they only need to satisfy certain conditions on their counts. The two files could even be merged into one, with all images in a single folder and a single txt file generated, after which adjusting the --dataRatio parameter would suffice. Is this understanding correct? If so, what is the point or advantage of splitting them into two txt files? --fgLists: a list that contains list files in which all images share the same fg-bg ratio, e.g. ['DIM.txt','SHM.txt']
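For illustration only, here is one hypothetical reading of why the lists are kept separate: each entry of --fgLists could be paired with its own composition ratio, so merging the files would force a single ratio onto all foregrounds. The loader below is a sketch of that idea, not the repository's actual implementation, and the `data_ratio` values are made up.

```python
fg_lists = ['DIM.txt', 'SHM.txt']
data_ratio = [100, 1]  # hypothetical: backgrounds composited per foreground in each list

samples = []
for list_file, ratio in zip(fg_lists, data_ratio):
    with open(list_file) as f:
        fgs = [line.strip() for line in f if line.strip()]
    # Every foreground from this list gets `ratio` composite samples.
    samples += [(fg_path, ratio) for fg_path in fgs]
```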

Hey, great work! I found a bug when loading data for end-to-end training. In train.py, the data is required in the following format: `img, trimap_gt, alpha_gt, bg, fg =...
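For reference, here is a minimal sketch of a Dataset whose `__getitem__` returns the five tensors that train.py unpacks; the shapes, thresholds, and composition step are assumptions for illustration, not the repository's actual loader.

```python
import torch
from torch.utils.data import Dataset

class MattingSamples(Dataset):
    def __init__(self, fgs, bgs, alphas):
        # Lists of pre-loaded tensors: fg/bg are (3, H, W), alpha is (1, H, W), all in [0, 1].
        self.fgs, self.bgs, self.alphas = fgs, bgs, alphas

    def __len__(self):
        return len(self.fgs)

    def __getitem__(self, i):
        fg, bg, alpha_gt = self.fgs[i], self.bgs[i], self.alphas[i]
        img = alpha_gt * fg + (1.0 - alpha_gt) * bg        # composite input image
        trimap_gt = torch.ones_like(alpha_gt)              # 1 = unknown region
        trimap_gt[alpha_gt <= 0.05] = 0                    # 0 = background (threshold assumed)
        trimap_gt[alpha_gt >= 0.95] = 2                    # 2 = foreground (threshold assumed)
        return img, trimap_gt, alpha_gt, bg, fg
```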

When I train the T-net, I am frustrated because the training speed is too slow. I use 'aisegmentcom-matting-human-datasets' to train this model (about 34,000 images), batch_size=8,...

Hi, when I do the pretrain_m_net stage, my losses are extremely small. Is that wrong? ![image](https://user-images.githubusercontent.com/30117384/62762133-f7778780-baba-11e9-8dfe-c2e05e2b4e73.png)
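As a rough sanity check on the scale, assuming the M-net pre-training loss follows the paper's prediction loss (an L1-style alpha loss plus a compositional loss, both averaged per pixel over values in [0, 1]), small absolute values are not unusual. A sketch under that assumption:

```python
import torch

def prediction_loss(alpha_p, alpha_g, img, fg, bg, gamma=0.5, eps=1e-6):
    # Charbonnier-style L1 on the alpha map, averaged per pixel.
    l_alpha = torch.sqrt((alpha_p - alpha_g) ** 2 + eps).mean()
    # Compositional loss: recomposite with the predicted alpha and compare to the input image.
    comp_p = alpha_p * fg + (1.0 - alpha_p) * bg
    l_comp = torch.sqrt((comp_p - img) ** 2 + eps).mean()
    # With both terms normalised to [0, 1] pixel values, totals well below 0.1 can be normal.
    return gamma * l_alpha + (1.0 - gamma) * l_comp
```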