
output of domain B network varies in color

MengXinChengXuYuan opened this issue 3 years ago · 4 comments

Hi, I decided to train the domain B network first, because it seems to be the easiest part. But I found there are usually color shifts in the generated images, as mentioned in section 3.4 (Face Enhancement) of the paper.

I'm confused about what causes this; in my opinion it should be very easy for the network to produce something exactly the same as the input.

Here are some results: [three example output images attached]

The output is usually lighter, and sometimes yellower, even though I added an L1 loss. I think this could make the final result a little uncontrollable.
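For context, the reconstruction term I mean is wired into the generator objective roughly like this (a minimal sketch, not the repo's actual training code; `netG`, `netD`, and `lambda_rec` are illustrative names):

```python
import torch
import torch.nn.functional as F

# Minimal sketch of a generator objective with a Smooth L1 reconstruction
# term. `netG`, `netD`, and `lambda_rec` are illustrative names, not the
# repo's actual code.
def generator_loss(netG, netD, real_image, lambda_rec=10.0):
    fake_image = netG(real_image)      # domain B reconstructs its own input
    pred_fake = netD(fake_image)
    # Non-saturating GAN loss: push D's prediction on fakes toward "real".
    loss_gan = F.binary_cross_entropy_with_logits(
        pred_fake, torch.ones_like(pred_fake))
    # Per-pixel penalty that should anchor output colors to the input.
    loss_rec = F.smooth_l1_loss(fake_image, real_image)
    return loss_gan + lambda_rec * loss_rec
```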

I would also like to ask: is it possible to share the training logs of the provided pretrained weights? Here's a snippet of mine:

```
(epoch: 82, iters: 30720, time: 0.011 lr: 0.00020) G_GAN: 0.855 G_GAN_Feat: 1.952 G_VGG: 2.144 G_KL: 0.915 D_real: 0.380 D_fake: 0.363 Smooth_L1: 0.083
(epoch: 82, iters: 33920, time: 0.011 lr: 0.00020) G_GAN: 0.815 G_GAN_Feat: 1.943 G_VGG: 2.100 G_KL: 0.917 D_real: 0.433 D_fake: 0.335 Smooth_L1: 0.082
(epoch: 82, iters: 37120, time: 0.011 lr: 0.00020) G_GAN: 0.819 G_GAN_Feat: 2.025 G_VGG: 2.185 G_KL: 0.935 D_real: 0.411 D_fake: 0.350 Smooth_L1: 0.095
(epoch: 82, iters: 40320, time: 0.011 lr: 0.00020) G_GAN: 0.802 G_GAN_Feat: 1.988 G_VGG: 2.160 G_KL: 0.926 D_real: 0.423 D_fake: 0.355 Smooth_L1: 0.083
```
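In case it helps anyone compare runs, log lines in the format above can be parsed and plotted with something like this (a quick sketch; the regex assumes exactly the format shown, and `loss_log.txt` is an assumed filename):

```python
import re
import matplotlib.pyplot as plt

# Quick sketch for plotting loss curves from log lines in the format above.
line_re = re.compile(r"\(epoch: (\d+), iters: (\d+),.*?\)(.*)")
loss_re = re.compile(r"(\w+): ([\d.]+)")

g_vgg, smooth_l1 = [], []
with open("loss_log.txt") as f:          # assumed log filename
    for line in f:
        m = line_re.search(line)
        if not m:
            continue
        losses = {k: float(v) for k, v in loss_re.findall(m.group(3))}
        g_vgg.append(losses["G_VGG"])
        smooth_l1.append(losses["Smooth_L1"])

# Plot against the logging step, since `iters` resets every epoch.
plt.plot(g_vgg, label="G_VGG")
plt.plot(smooth_l1, label="Smooth_L1")
plt.xlabel("logging step")
plt.legend()
plt.show()
```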

MengXinChengXuYuan avatar Apr 07 '21 06:04 MengXinChengXuYuan

Maybe you should remove the L1 loss: under an L1 loss, the model tends to generate colors near the median of your training distribution. By the way, what dataset are you using? VOC2012, or something else?
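To see this concretely, here is a toy illustration (my own sketch, not code from the repo): a single free value fitted under L1 converges to the median of the targets, while L2 converges to the mean.

```python
import torch

# Toy demonstration: one free parameter fitted to a skewed set of
# "pixel values" under L1 vs. L2 loss.
targets = torch.tensor([0.1, 0.2, 0.3, 0.4, 0.9])  # median 0.30, mean 0.38

for name, loss_fn in [("L1", lambda x: (x - targets).abs().mean()),
                      ("L2", lambda x: ((x - targets) ** 2).mean())]:
    x = torch.tensor(0.5, requires_grad=True)
    opt = torch.optim.SGD([x], lr=0.01)
    for _ in range(5000):
        opt.zero_grad()
        loss_fn(x).backward()
        opt.step()
    print(name, round(x.item(), 3))  # L1 -> ~0.30 (median), L2 -> ~0.38 (mean)
```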

syfbme avatar Apr 07 '21 08:04 syfbme

> Maybe you should remove the L1 loss: under an L1 loss, the model tends to generate colors near the median of your training distribution. By the way, what dataset are you using? VOC2012, or something else?

But without the L1 loss it's still the same in my experiment :( I'm using FFHQ for now; in my case I only care about portraits.

MengXinChengXuYuan avatar Apr 07 '21 09:04 MengXinChengXuYuan

@raywzy @zhangmozhe Hi, is it possible to share the training logs of the provided weights? If you could also include some intermediate images generated during training, that would be great. I just want to figure out whether my training configuration is OK, and how long it takes to get reasonable weights.

MengXinChengXuYuan avatar Apr 09 '21 09:04 MengXinChengXuYuan

> @raywzy @zhangmozhe Hi, is it possible to share the training logs of the provided weights? If you could also include some intermediate images generated during training, that would be great. I just want to figure out whether my training configuration is OK, and how long it takes to get reasonable weights.

Hello, I am very interested in this project. How can I download the datasets used in this paper? Thank you in advance.

hello-trouble avatar Aug 01 '21 14:08 hello-trouble