AOT-GAN-for-Inpainting
Confusion about the my_layer_norm function and the GAN loss function
Hi, thank you for your excellent work. I got good results when running the demo, but while reading the source code I ran into some questions about the GAN loss function.
In the paper, the discriminator loss for fake_img should be self.loss_fn(d_fake, gauss(1 - mask)), but in the code you just use gauss(mask). Is there something wrong with my understanding?
What's more, the discriminator loss for real_img should be self.loss_fn(d_real, d_real_label) with d_real_label = torch.ones(...), but the code sets it to torch.zeros(...).
By the way, could you explain what my_layer_norm does in the AOT block?
Thanks.
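For what it's worth, the d_fake/d_real targets look like a flipped label convention rather than a bug: if this discriminator outputs 0 for real and 1 for fake, then gauss(mask) and zeros are the mirror images of the paper's gauss(1 - mask) and ones. A minimal sketch, assuming loss_fn is an element-wise MSE (LSGAN-style) and a normalized blur kernel, so that blurring 1 - mask gives 1 - blur(mask):

import torch
import torch.nn.functional as F

# Hypothetical tensors standing in for the real ones.
d_out = torch.rand(2, 1, 8, 8)        # D's output under the paper's convention (real -> 1)
soft_target = torch.rand(2, 1, 8, 8)  # stands in for gauss(1 - mask)

# Paper convention: match d_out to gauss(1 - mask).
paper_loss = F.mse_loss(d_out, soft_target)
# Flipped convention: the discriminator that outputs 1 - d_out is matched
# to 1 - gauss(1 - mask) == gauss(mask), since blur is linear and blur(1) == 1.
flipped_loss = F.mse_loss(1 - d_out, 1 - soft_target)

assert torch.allclose(paper_loss, flipped_loss)  # same objective, relabeled

So the discriminator targets are consistent with each other; the real inconsistency is in the generator label discussed below.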
I have the same confusion about the GAN loss part... the GAN loss in the code does not seem to do adversarial training.
Same question here.
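On the my_layer_norm part of the question: as far as I can tell from the repo, it is a hand-rolled per-sample, per-channel spatial normalization applied to the gating branch of the AOT block, not a standard nn.LayerNorm. A sketch of what I read there (the exact scaling constants are from my reading of the code and may differ):

import torch

def my_layer_norm(feat: torch.Tensor) -> torch.Tensor:
    # Normalize each channel of each sample over its spatial dims (H, W),
    # then rescale. In the AOT block the result is passed through a sigmoid
    # and used as a soft mask to blend the dilated-conv branch with the input.
    mean = feat.mean((2, 3), keepdim=True)
    std = feat.std((2, 3), keepdim=True) + 1e-9
    feat = 2 * (feat - mean) / std - 1
    feat = 5 * feat
    return feat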
g_fake_label = torch.ones_like(g_fake).cuda() is wrong for the GAN here; g_fake_label = torch.zeros_like(g_fake).cuda() is right. The author masked the bug with parser.add_argument('--adv_weight', type=float, default=0.01, help='loss weight for adversarial loss'), so the adversarial loss was effectively useless for training netG:
# Soft discriminator labels (this code's convention: real -> 0, fake -> 1):
# fake patches are matched to the blurred mask, real patches to zeros.
d_fake_label = gaussian_blur(masks, (self.ksize, self.ksize), (10, 10)).detach().cuda()
d_real_label = torch.zeros_like(d_real).cuda()
# g_fake_label = torch.ones_like(g_fake).cuda()  # original (wrong under this convention)
g_fake_label = torch.zeros_like(g_fake).cuda()   # push D(G(x)) toward "real" (0)

dis_loss = self.loss_fn(d_fake[masks > 0.5], d_fake_label[masks > 0.5]) + self.loss_fn(d_real, d_real_label)
gen_loss = self.loss_fn(g_fake[masks > 0.5], g_fake_label[masks > 0.5])
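To make the "no use for adv loss" point concrete: under this convention the discriminator is trained to output 1 on fake patches, so a ones target is nearly satisfied already and, if anything, pulls the generator toward looking more fake; a zeros target actually drives D(G(x)) toward "real". A minimal sketch, assuming MSE as loss_fn:

import torch
import torch.nn.functional as F

# D scores the inpainted patches as "mostly fake" under the real -> 0 convention.
g_fake = torch.full((2, 1, 8, 8), 0.9, requires_grad=True)

# Wrong target: loss is already tiny, and its gradient pulls the score
# toward 1 ("fake"), the opposite of adversarial training.
loss_wrong = F.mse_loss(g_fake, torch.ones_like(g_fake))   # ~0.01
# Right target: penalizes the "fake" score and pushes it toward 0 ("real").
loss_right = F.mse_loss(g_fake, torch.zeros_like(g_fake))  # ~0.81

print(loss_wrong.item(), loss_right.item())

With the default adv_weight = 0.01 the mis-signed term is tiny either way, which is presumably why training still produced reasonable results despite it.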