
performance for stage2

Open xup16 opened this issue 5 years ago • 7 comments

Hi~ Thank you for the excellent work. I have reproduced the stage-1 performance following your code, but I cannot reproduce the stage-2 performance reported in the paper (50 SAD). Would you share your stage-2 performance and model if you have tried it? Thanks!

xup16 avatar Dec 13 '19 06:12 xup16

I tried stage-2 training (resuming from the SAD=54.42 model and training only the convolutions of the refine stage), but the performance is not as good as the paper's (our best SAD=53.74). There may be some mistake in the stage-2 training code (loss function or network structure).

huochaitiantang avatar Dec 13 '19 09:12 huochaitiantang
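
For readers following along, here is a minimal, hypothetical sketch of the stage-2 setup described above (freeze the encoder-decoder, update only the refinement convolutions). The toy architecture and module names are illustrative and are not taken from this repository's code:

```python
import torch
import torch.nn as nn

# Toy stand-in for the matting network (the real network is much larger):
# an encoder-decoder that predicts a raw alpha from RGB + trimap, plus a small
# refinement head whose output is added to the raw alpha via a skip connection,
# mirroring the two-stage design of Deep Image Matting.
class TinyMatting(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder_decoder = nn.Sequential(   # input: RGB + trimap (4 channels)
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
        self.refine = nn.Sequential(            # input: RGB + raw alpha (4 channels)
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        raw_alpha = self.encoder_decoder(x)
        residual = self.refine(torch.cat([x[:, :3], raw_alpha], dim=1))
        return raw_alpha, (raw_alpha + residual).clamp(0, 1)

model = TinyMatting()
# In the real setup you would first load the stage-1 checkpoint here, e.g.:
# model.load_state_dict(torch.load("stage1_checkpoint.pth")["state_dict"])

# Stage 2: freeze the encoder-decoder so only the refine convolutions are updated.
for p in model.encoder_decoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-5)
```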

Thank you for the reply. Did you try stage 3, i.e. training the encoder, decoder, and refine stage end to end?

xup16 avatar Dec 13 '19 09:12 xup16

Yes, I also tried stage-3 training (resuming from the stage-2 SAD=53.74 model and training the whole network end-to-end) but got worse performance (best SAD=55.48). There must be some mistake in the refine-stage training. Maybe you could help check the implementation code.

huochaitiantang avatar Dec 13 '19 10:12 huochaitiantang
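
Continuing the TinyMatting sketch above, stage 3 as described here would load the stage-2 weights, unfreeze everything, and fine-tune the whole network end-to-end; the learning rate below is only a placeholder, not a value from this repository or the paper:

```python
# Stage 3: fine-tune the entire network end-to-end after loading stage-2 weights.
for p in model.parameters():
    p.requires_grad = True

# Fresh optimizer over all parameters; pick the fine-tuning learning rate and
# schedule to suit your own setup.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```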

OK. Thank you very much.

xup16 avatar Dec 13 '19 10:12 xup16

Hi~ Thank you for the excellent work.
1. I have trained stage 1 from scratch following your code, but I get 59.40. I set the lr to a constant 0.00001, while your code adjusts the lr during training. Is this the reason?
2. How do you train stage 2? I tried (resuming from my stage-1 59.40 model, batch_size=4, 4 cards), but the results were poor. Can you show me the parameter settings for stage 2?
Thank you.

AstonyJ avatar Dec 14 '19 03:12 AstonyJ
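
On the learning-rate question above: many training scripts decay the lr over the course of training rather than keeping it constant, and that alone can shift the final SAD noticeably. Below is a generic PyTorch sketch of a step decay; check this repository's train script for the exact schedule it uses before copying any of these numbers:

```python
import torch
import torch.nn as nn

# Generic step decay: start at 1e-5 and divide by 10 every 20 epochs.
# These numbers are illustrative only; the repository's own lr-adjustment
# logic may use a different rule entirely.
model = nn.Conv2d(4, 1, 3, padding=1)   # placeholder module
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)

for epoch in range(60):
    # ... run the training batches for this epoch, calling optimizer.step() ...
    scheduler.step()                     # decay the lr every `step_size` epochs
    print(epoch, optimizer.param_groups[0]["lr"])
```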

@huochaitiantang I also met this problem. I found that the composition loss is really hard to train. Have you found the reason?

wrrJasmine avatar Dec 20 '19 03:12 wrrJasmine
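
Regarding the composition loss being hard to train: for reference, the Deep Image Matting paper composites the image with the predicted alpha and compares it against the ground-truth image over the unknown region. The sketch below is a generic implementation, not this repository's code; mismatched value ranges (0-1 vs 0-255) between the alpha and the RGB tensors are one thing worth checking when this loss behaves badly:

```python
import torch

def composition_loss(pred_alpha, fg, bg, image, unknown_mask, eps=1e-6):
    """Composition loss: Charbonnier distance between the image re-composited
    with the predicted alpha and the ground-truth image, averaged over the
    unknown (trimap == 128) region. Shapes: pred_alpha/unknown_mask (N,1,H,W),
    fg/bg/image (N,3,H,W); all values assumed to lie in [0, 1]."""
    comp = pred_alpha * fg + (1.0 - pred_alpha) * bg
    diff = torch.sqrt((comp - image) ** 2 + eps * eps)
    return (diff * unknown_mask).sum() / (3.0 * unknown_mask.sum() + eps)
```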

> (quoting AstonyJ's comment above)

Hi, I have trained stage 1 from scratch and ran the same code, but I get 86.77. Did you make any changes?

SahadevPoudel avatar Jan 02 '20 05:01 SahadevPoudel
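
For anyone comparing the SAD numbers in this thread, here is a sketch of the metric as it is commonly reported on the Composition-1k test set. Conventions differ between evaluation scripts (alpha in 0-1 vs 0-255, whole image vs unknown region only), so check this repository's test code before comparing against numbers from other papers:

```python
import numpy as np

def sad(pred_alpha, gt_alpha, trimap):
    """Sum of absolute differences between predicted and ground-truth alpha
    (both in [0, 1]) over the unknown region of the trimap, scaled by 1/1000
    as is conventional in the matting literature."""
    unknown = (trimap == 128)
    return float(np.abs(pred_alpha - gt_alpha)[unknown].sum()) / 1000.0
```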