
Some error using the connectivity loss function on GPU

Open kelisiya opened this issue 4 years ago • 12 comments

I'm trying to reproduce your paper. The CNN returns a tensor, but the loss function uses NumPy, so I convert the tensor to NumPy, compute the loss, and convert back to a tensor. I found the connectivity loss doesn't work this way. How did you deal with it?

kelisiya avatar Nov 15 '19 02:11 kelisiya

Hi @kelisiya, sorry for the late reply due to the CVPR deadline. Do you mean the training loss, or the evaluation errors? The connectivity loss is not used in training; it is only used when evaluating matte quality. It should be fine if you follow my instructions to run the code.
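
For reference, a minimal sketch of that evaluation-only pattern (`compute_connectivity_error` here stands in for whichever NumPy metric implementation you use; it is not part of the training graph):

```python
import torch

def evaluate_connectivity(pred, target, compute_connectivity_error):
    # Evaluation only: detach from the autograd graph and move to CPU,
    # then hand plain NumPy arrays to the metric. No gradient flows here,
    # so the tensor-to-NumPy conversion is harmless.
    pred_np = pred.detach().cpu().numpy()
    target_np = target.detach().cpu().numpy()
    return compute_connectivity_error(pred_np, target_np)
```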

poppinace avatar Nov 19 '19 09:11 poppinace


It works. I plugged your backbone into DIM and used the alpha loss and gradient loss, and finally I could train your model.

kelisiya avatar Nov 19 '19 09:11 kelisiya

@kelisiya Nice, let me know if your model achieves better results :)

poppinace avatar Nov 19 '19 09:11 poppinace

Some inference questions about IndexNet:

1. The inference code uses np.clip, but the network's prediction is not in [0, 1]. Is it right that the training loss function uses cv2.normalize and then divides by 255?
2. When I train for some epochs without your pretrained weights, the loss does not go down and the inference images show artifacts at the edges. I tried applying torch.clamp() to the network's output tensor before computing the loss, and also normalizing it, but neither seems effective. Do you know what causes these edge artifacts?

kelisiya avatar Dec 05 '19 05:12 kelisiya

@kelisiya It is normal that the network's output is not bounded by [0, 1], due to the nature of regression. This is why postprocessing is required to eliminate unreasonable outputs. However, you should not use the clip operator or torch.clamp during training, because the gradient would be clipped to zero as well. This may be why the loss does not decrease. The clip operator should only be applied at inference.
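
A minimal sketch of that distinction (the L1 alpha loss and the names here are placeholders, not the exact training code in this repo):

```python
import torch
import torch.nn.functional as F

def training_loss(pred_raw, alpha_gt):
    # Training: compute the loss on the raw, unbounded prediction,
    # so gradients are not zeroed out by a clamp.
    return F.l1_loss(pred_raw, alpha_gt)

def postprocess(pred_raw):
    # Inference only: bound the prediction to a valid alpha range.
    return torch.clamp(pred_raw, 0.0, 1.0)
```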

poppinace avatar Dec 05 '19 05:12 poppinace

So are you using cv2.normalize in your training loss function? In other words, if I apply a sigmoid to the returned tensor, might that also be effective?

kelisiya avatar Dec 05 '19 05:12 kelisiya

I don't use cv2.normalize. I also tried a sigmoid, but did not find it necessary.

poppinace avatar Dec 05 '19 05:12 poppinace

So you just train on the raw output tensor, without any activation function on the output.

kelisiya avatar Dec 05 '19 05:12 kelisiya

Exactly!

poppinace avatar Dec 05 '19 05:12 poppinace

Thanks for your answer.

kelisiya avatar Dec 05 '19 05:12 kelisiya

Other questions:

1. How do you calculate the alpha loss? Is the target alpha or alpha/255?
2. Is the IndexNet input the image concatenated with trimap/255, like in DIM?

kelisiya avatar Dec 05 '19 07:12 kelisiya

Of course you should normalize the alpha to [0, 1] before calculating the loss. And yes, the input is the concatenation, like in DIM.
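
A minimal sketch of that preprocessing, assuming 8-bit CHW tensors (the /255 image scaling is a simplification; the actual code may use mean/std normalization):

```python
import torch

def make_input(image, trimap, alpha):
    # Normalize the 8-bit alpha target to [0, 1] before the loss.
    alpha_gt = alpha.float() / 255.0
    # Concatenate image and normalized trimap along the channel axis,
    # giving a 4-channel (3 + 1) input as in DIM.
    net_in = torch.cat([image.float() / 255.0, trimap.float() / 255.0], dim=0)
    return net_in, alpha_gt
```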

poppinace avatar Dec 09 '19 11:12 poppinace