portrait_matting
Training tricks?
I have spent a month training the portrait matting model, but I cannot get a good one; it does not converge. If possible, could you share some tricks for training portrait matting?
Lastly, I generated the alpha mattes with KNN matting.
If possible, could you also provide the model you have trained?
@GuideWsp What are the values of the alpha mask you are feeding? Are they between 0-255 or 0-1?
Also check get_valid_names() in common.py to make sure you are dealing with the entire dataset at a time; in my case, due to some carelessness, I was sending just the same 5 images every time, resulting in non-convergence. Make sure all of the images are fed with valid values and not None: cv2.imread does not throw an error like np.load does when it cannot find the file.
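To make that failure mode loud instead of silent, a batch-level sanity check can be added to the data pipeline. This is a minimal sketch; `check_batch` is a hypothetical helper, not a function from the repo:

```python
import numpy as np

def check_batch(images, names):
    """Fail fast if any image failed to load: cv2.imread silently
    returns None for a missing file, unlike np.load which raises."""
    bad = [name for img, name in zip(images, names) if img is None]
    if bad:
        raise ValueError(f"failed to load: {bad}")
    if len(set(names)) != len(names):
        raise ValueError("duplicate file names in batch")
    return True

# Simulate a loader where one file is missing (imread returned None).
images = [np.zeros((4, 4, 3), np.uint8), None]
names = ["a.png", "b.png"]
try:
    check_batch(images, names)
except ValueError as err:
    print(err)  # -> failed to load: ['b.png']
```

Running this once per epoch (or inside the data generator) also catches the "same 5 images forever" bug via the duplicate-name check.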
Before the data is sent to the net, it is transformed to 0-1. At the beginning of training, all the data was sent to the net, but I found that the closed-form matting may have some errors: the result values are between 76-186. Is that strange?
The alpha matte is between 0-1, and the input has been normalized the same way as for VGGNet.
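For reference, a minimal sketch of that preprocessing, assuming Caffe-style VGG mean subtraction on a BGR input (the exact mean values the repo uses are an assumption here, not confirmed by the thread):

```python
import numpy as np

# ImageNet per-channel means in BGR order, as used with Caffe VGG
# weights -- an assumption for this sketch.
VGG_MEAN_BGR = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def preprocess(bgr_uint8, alpha_uint8):
    """Scale an 8-bit alpha matte to [0, 1] and mean-subtract the input."""
    img = bgr_uint8.astype(np.float32) - VGG_MEAN_BGR  # VGG-style input
    alpha = alpha_uint8.astype(np.float32) / 255.0     # alpha in [0, 1]
    return img, alpha

img, alpha = preprocess(
    np.full((2, 2, 3), 128, np.uint8),
    np.array([[0, 255], [76, 186]], np.uint8),
)
```

If the stored mattes only span 76-186 instead of 0-255, this division maps them to roughly 0.30-0.73, so the network never sees fully-opaque or fully-transparent targets; that alone could explain poor convergence.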
Hi, when you test, do you run:
$ python scripts/train.py --mode {seg,seg+,seg_tri,mat} --model_path