
Should I use the D feature-matching loss while the discriminator is still training?

Open Claire210222 opened this issue 3 years ago • 2 comments


Recently I applied the D feature-matching loss directly to my pix2pix model (which has only one discriminator), with code like this:

```python
loss_G_GAN_Feat = 0
if not self.opt.no_ganFeat_loss:
    feat_weights = 4.0 / (self.opt.n_layers_D + 1)
    D_weights = 1.0  # only one discriminator, so no averaging over multiple Ds
    for j in range(len(pred_fake) - 1):
        loss_G_GAN_Feat += D_weights * feat_weights * \
            criterionFeat(pred_fake[j], pred_real[j].detach()) * self.opt.lambda_feat

loss_G = loss_G_GAN + loss_G_GAN_Feat
```

In the above, loss_G_GAN is the adversarial loss. In effect, I just replaced the L1 loss in pix2pix with the D feature-matching loss. However, the results are terrible: the model cannot produce images that look similar to the ground truth.

I wonder why. The papers say a perceptual loss works better than an L1 loss, but that just does not hold for me. Is it because I use the D feature-matching loss while the discriminator is still training, i.e. because D is not fixed? Or is there some other reason? I also tried feature matching with a pre-trained VGG model whose parameters are fixed. Although that is a little better, the SSIM is still much lower than with the original pix2pix model.
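For reference, this is roughly what my fixed-VGG feature-matching loss looks like (a minimal sketch, not my exact code; the layer slices and per-layer weights are assumptions in the style of the pix2pixHD VGG loss):

```python
import torch.nn as nn
from torchvision import models

class VGGFeatLoss(nn.Module):
    def __init__(self):
        super().__init__()
        vgg = models.vgg19(pretrained=True).features.eval()
        for p in vgg.parameters():
            p.requires_grad = False  # VGG stays fixed during training
        # a few slices at increasing depth; features are compared at each depth
        self.slices = nn.ModuleList(
            [vgg[:2], vgg[2:7], vgg[7:12], vgg[12:21], vgg[21:30]]
        )
        self.weights = [1 / 32, 1 / 16, 1 / 8, 1 / 4, 1.0]
        self.l1 = nn.L1Loss()

    def forward(self, fake, real):
        loss = 0.0
        x, y = fake, real
        for w, block in zip(self.weights, self.slices):
            x, y = block(x), block(y)
            loss += w * self.l1(x, y.detach())
        return loss
```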

Could anyone with more experience give me some advice?

Claire210222 · May 23 '21

It may be because you still need a pixel-level loss, rather than making all of the losses perceptual ones.
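For example, you could keep a pixel-level L1 term alongside the GAN and feature-matching terms, roughly like this (just a sketch; fake_B, real_B and lambda_L1 are placeholder names, not from your code):

```python
import torch.nn as nn

criterion_pixel = nn.L1Loss()

# inside the generator update, assuming fake_B, real_B and the losses
# from your snippet are already computed
lambda_L1 = 100.0  # e.g. the weight used for L1 in the original pix2pix
loss_G_pixel = criterion_pixel(fake_B, real_B) * lambda_L1

# combine adversarial, feature-matching and pixel-level terms
loss_G = loss_G_GAN + loss_G_GAN_Feat + loss_G_pixel
```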

shoana3226 · Oct 26 '21

You mean I should use a perceptual loss and a pixel-level loss together?

Claire210222 · Nov 21 '21