pix2pixHD
Should I use the D feature matching loss while the discriminator is still being trained?
Recently I applied the D feature matching loss directly to my pix2pix model (which has only one discriminator) with this code:
```python
loss_G_GAN_Feat = 0
if not self.opt.no_ganFeat_loss:
    feat_weights = 4.0 / (self.opt.n_layers_D + 1)
    for j in range(len(pred_fake) - 1):
        loss_G_GAN_Feat += D_weights * feat_weights * \
            criterionFeat(pred_fake[j], pred_real[j].detach()) * self.opt.lambda_feat
loss_G = loss_G_GAN + loss_G_GAN_Feat
```
In the above, loss_G_GAN is the adversarial loss. In fact, I just replaced the L1 loss in the pix2pix objective with the D feature matching loss. However, the result is terrible: the model is unable to produce pictures similar to the ground truth.
I wonder why? Papers say that perceptual loss is better than L1 loss, but it just does not work for me. Is it because I use the D feature matching loss while the discriminator is still training, i.e. because D is not fixed? Or is there some other reason? I also tried feature matching with a pre-trained VGG model whose parameters are fixed; although that is a little better, the SSIM is still quite low, much lower than the original pix2pix model.
Could anyone with experience offer some advice?
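For reference, the feature matching term in the snippet above can be written as a standalone function. This is a minimal sketch using NumPy arrays in place of PyTorch tensors; the function name and default weights (`lambda_feat=10.0`, `n_layers_d=3`) mirror the pix2pixHD defaults but are illustrative, not taken from the repo:

```python
import numpy as np

def l1(a, b):
    # mean absolute error, analogous to nn.L1Loss
    return np.mean(np.abs(a - b))

def d_feature_matching_loss(pred_fake, pred_real,
                            n_layers_d=3, lambda_feat=10.0, d_weights=1.0):
    """Feature matching loss over one discriminator's intermediate features.

    pred_fake / pred_real are lists of feature maps from the discriminator;
    the last element (the final real/fake score) is excluded, matching the
    range(len(pred_fake) - 1) loop in the snippet above.
    """
    feat_weights = 4.0 / (n_layers_d + 1)
    loss = 0.0
    for j in range(len(pred_fake) - 1):
        loss += d_weights * feat_weights * l1(pred_fake[j], pred_real[j]) * lambda_feat
    return loss

# toy inputs: three intermediate feature maps plus the final score
rng = np.random.default_rng(0)
fake_feats = [rng.standard_normal((1, 8, 4, 4)) for _ in range(4)]
real_feats = [rng.standard_normal((1, 8, 4, 4)) for _ in range(4)]
loss = d_feature_matching_loss(fake_feats, real_feats)
```

Note that the loss is zero when fake and real features coincide, so it only pulls the generator's intermediate statistics toward the real ones; it carries no pixel-level constraint by itself.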
It may be because you still need a pixel-level loss, rather than making all of the losses perceptual.
You mean using the perceptual loss and a pixel-level loss together?
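If they are combined, the generator objective keeps the L1 reconstruction term from the original pix2pix and adds the feature matching term on top, instead of replacing one with the other. A minimal NumPy sketch; `generator_loss` and the lambda weights are hypothetical names chosen for illustration:

```python
import numpy as np

def l1(a, b):
    # mean absolute error, analogous to nn.L1Loss
    return np.mean(np.abs(a - b))

def generator_loss(loss_g_gan, fake_img, real_img, feats_fake, feats_real,
                   lambda_l1=100.0, lambda_feat=10.0, n_layers_d=3):
    # pixel-level reconstruction term, as in the original pix2pix
    loss_pixel = lambda_l1 * l1(fake_img, real_img)
    # feature matching term over the discriminator's intermediate features;
    # the last list element is the final score, so it is skipped
    feat_weights = 4.0 / (n_layers_d + 1)
    loss_feat = 0.0
    for j in range(len(feats_fake) - 1):
        loss_feat += feat_weights * l1(feats_fake[j], feats_real[j]) * lambda_feat
    return loss_g_gan + loss_pixel + loss_feat
```

The pixel term anchors the output to the ground truth at every pixel, while the feature matching term only constrains intermediate statistics, which may explain the low SSIM when the pixel term is dropped entirely.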