Deep-Image-Matting
May I ask which dataset are you training on?
Thank you for providing the pretrained model. I have a question regarding the training dataset: did you train on the dataset from the original paper, or on something else?
I tried testing on doll.png and troll.png from the http://www.alphamatting.com dataset and got pretty bad results; I'm not sure what happened.
I modified the test.py code so that I directly use the trimap provided by the dataset, instead of generating a trimap from the alpha channel.
The segmentation seems to be heavily biased toward the white areas of the alpha channel.
Here is my result on troll.png:
I am new to image matting and not sure where I went wrong. Do we need to do some preprocessing?
@protossw512 I trained on my company's dataset, part of which can be seen in the test_data folder; it's relatively simple. The dataset mentioned in the paper is more complex, so a model trained on it can generalize very well. Unfortunately, I tried to train on the dataset mentioned in the paper, but it's really hard to get it to converge. I'll try to improve the performance on it in the future. Thank you for your great comments!
@Joker316701882 Thanks for the reply. Since I am working on a university research project, I think I can get the dataset from the authors. I will post an update if I manage to train a better model.
@protossw512 Looking forward to your result! Let me know if you make it!
@protossw512 If you have any progress send us an update.
In the paper, the authors mentioned this: "We find images on simple or plain backgrounds (Fig. 2a), including the 27 training images from [25] and every fifth frame from the videos from [26]"
Do you know how to get that video from [26]? I glanced over that paper but don't know which video to use. "[26] E. Shahrian, B. Price, S. Cohen, and D. Rajan. Temporally coherent and spatially accurate video matting. In Proceedings of Eurographics, 2012. 3"
@protossw512 If you have made any progress, could you share the trained model with us?
Yeah, I slightly modified the loss function and the data loading code, fixed some issues with the original code that could lead to poor results, and added a refinement stage, and I'm getting some promising results. However, I'm not sure I'm allowed to release a pretrained model, since I trained on the Adobe dataset and I'm not supposed to share it without the authors' permission. I haven't had time to refactor my code and post it on GitHub yet. The model itself is not very hard to train; it barely needs any parameter tuning.
Thank you for your reply. It would be much appreciated if you shared your code at your convenience.
Yeah, I think I am going to do that during winter break
@protossw512 Hi, can you tell me in detail how you fixed the code, apart from the refinement stage? Specifically, what were the problems with the loss function and the network architecture?
@protossw512 Glad to hear that you achieved promising results! Could you share your email address? If possible, I would like to discuss the refinements you made to achieve good results. Thanks :)
@Joker316701882 @ChengJunqi @Ru-Xiang Yeah, sure, please email me: helloxinyao.wang###gmail.com I think one fatal mistake in the original code is the creation of the trimap from the alpha: you have to perform both erosion and dilation on the alpha, but you only applied dilation. That's why the network basically memorizes the boundaries of your trimap where the pixel values equal 255.
There are probably other minor issues, but I can't remember them. I would like to rewrite the code, but I don't have the time for that; I can clean it up a little and upload it to my GitHub.
@protossw512 Did you manage to get a better model?
@protossw512 Could you share the improvement in your github?
Hello, if you can't get the dataset from Adobe, check pixabay.com. There is a large number of fairly large pictures with transparency and good alpha mattes. For instance, try this search: https://pixabay.com/fr/photos/?min_width=640&min_height=640&colors=transparent&image_type=photo&order=latest
The only issue is that you have to download the images by hand, but with a good methodology you can collect 300 foreground images in a couple of hours.
Combined with the backgrounds described in the DIM publication, you can obtain encouraging results without any changes to the network.
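For reference, pairing such foregrounds with backgrounds comes down to the standard compositing equation C = αF + (1 − α)B. A minimal sketch (assuming NumPy; the `composite` helper is illustrative, not code from the repo or the paper):

```python
import numpy as np

def composite(fg, alpha, bg):
    """Composite a foreground over a background: C = alpha*F + (1-alpha)*B.
    fg and bg are HxWx3 uint8 images; alpha is an HxW uint8 matte."""
    a = alpha.astype(np.float32) / 255.0
    if a.ndim == 2:
        a = a[..., None]  # broadcast the matte over the color channels
    return (a * fg.astype(np.float32) + (1.0 - a) * bg.astype(np.float32)).astype(np.uint8)
```

Each pixabay foreground can be composited over many different backgrounds this way to multiply the effective size of the training set, as done in the DIM paper.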
@protossw512 Can you send me the Adobe dataset? I have sent an email to the author of Deep Image Matting, but I didn't get a reply. Thanks.
@protossw512, can you send me the Adobe dataset? I have sent an email to the author of Deep Image Matting, but I didn't get a reply. Thanks. My email is [email protected].
@protossw512 I know you may not be able to share your trained model because of the authorization issue you explained, but would you please share Joker's pretrained model? The link to his pretrained model is missing. I'd appreciate it very much if you could do me this favor. Thanks. My email: [email protected]
Have you got the dataset? Thanks!
@protossw512 Can you send me the Adobe dataset? I have sent an email to the author of Deep Image Matting, but I didn't get a reply. Thanks. My email is [email protected].
Can you send me the Adobe dataset? Thanks! My email is [email protected]
Can you please send me the Adobe dataset? The author is not answering emails. Thanks in advance. [email protected]
Could you send me the Adobe datasets? Thanks! My email is [email protected]
@protossw512 Can you send me the Adobe dataset? Thanks! My email is [email protected]. I'm a CS student and I need it for my MSc thesis.
Could you send me the dataset? I'm doing some related research. Thanks!
My email is [email protected]
@protossw512
Could you share the Adobe dataset? Thank you! My email is haoran.andy###gmail.com
@protossw512 Could you send me the dataset? I'm doing some related research. Thanks! My email is [email protected]
I haven't worked on this area for a long time.
Could you share the Adobe dataset? Thank you very much! My email is [email protected]