
Trained faces are all blurry and seem not learned

Open ecilay opened this issue 7 years ago • 5 comments

I implemented a version in PyTorch with the same architecture illustrated in your paper and code, though without orthogonal regularization and MDC. However, my generated faces at 300k iterations are still very blurry, like below. Do you have any idea why this might happen? Thanks very much!!

rec_step_300000

ecilay avatar May 31 '18 20:05 ecilay

There's plenty that can go wrong. Just based on these images it looks like even the VAE half of the network isn't working. I'd recommend starting by training a known-good DCGAN implementation in PyTorch and tweaking that, rather than rolling your own--there are too many details you have to get right, and getting just one thing wrong can break everything. You might also want to consider employing modern updates (particularly spectral norm), since they do help make things more robust.
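(For reference, spectral norm is a one-line wrapper in PyTorch. A minimal sketch of a hypothetical discriminator stack -- the layer sizes here are arbitrary, not the ones from the paper:)

```python
import torch
import torch.nn as nn

# nn.utils.spectral_norm wraps a layer so its largest singular value is
# kept near 1, which tends to stabilize GAN discriminator training.
disc = nn.Sequential(
    nn.utils.spectral_norm(nn.Conv2d(3, 64, 4, stride=2, padding=1)),
    nn.LeakyReLU(0.2),
    nn.utils.spectral_norm(nn.Conv2d(64, 128, 4, stride=2, padding=1)),
    nn.LeakyReLU(0.2),
    nn.Flatten(),
    nn.utils.spectral_norm(nn.Linear(128 * 16 * 16, 1)),
)

score = disc(torch.randn(2, 3, 64, 64))
print(score.shape)  # torch.Size([2, 1])
```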

Also note that you're training on a close-crop dataset; I recommend using the wider crops for more pleasing images.

ajbrock avatar May 31 '18 21:05 ajbrock

Thanks for the prompt reply! I actually started from a VAE/GAN model and tweaked it, combining the encoder and discriminator into one model as described in your paper, with two loss optimizers, as shown in your code. The VAE/GAN trained fine, but my IAN training is problematic as above. I will look at the code again. Thanks!

ecilay avatar May 31 '18 21:05 ecilay

Are you making sure to not propagate reconstruction gradients to the discriminator? I've always kept the "encoder" as a small MLP (or even a single dense layer) that operates on one of the last layers of the discriminator, but doesn't propagate gradients back to it.
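(A minimal sketch of that arrangement, with hypothetical stand-in layers -- a linear layer for the discriminator body and a small head for the encoder; the real models would be conv stacks:)

```python
import torch
import torch.nn as nn

disc_body = nn.Linear(784, 1000)     # stands in for the discriminator's conv stack
enc_head = nn.Linear(1000, 2 * 100)  # small encoder head: mean and logvar, 100-d each

x = torch.randn(4, 784)
feat = disc_body(x)
# .detach() cuts the graph here, so the VAE losses update enc_head only
# and never send reconstruction gradients back into disc_body.
mean, logvar = enc_head(feat.detach()).chunk(2, dim=1)
kl = -0.5 * torch.mean(1 + logvar - mean.pow(2) - logvar.exp())
kl.backward()
print(disc_body.weight.grad)  # None -- no gradient reached the discriminator body
```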

ajbrock avatar May 31 '18 21:05 ajbrock

Yes, the loss for discriminator_encoder = bce_real + bce_reconstruction + bce_sampled_noise (bce = binary cross entropy).

I have one model for 1) discriminator_encoder, and one model for 2) decoder, which is a normal DCGAN decoder. The above loss is for 1) discriminator_encoder. Pseudocode for the discriminator_encoder class is below:

class discriminator_encoder(nn.Module):
  def __init__(self):
    super().__init__()
    self.features = ...      # conv stack producing a 64*4 x 8 x 8 feature map
    self.lth_features = ...  # layer producing a vector of 1000
    self.output = ...        # real/fake head producing a scalar
    self.mean = ...          # head producing a 1000-d latent mean
    self.logvar = ...        # head producing a 1000-d latent logvar

  def forward(self, x):
    h = self.features(x)
    lth_features = self.lth_features(h)
    return lth_features, self.output(lth_features), self.mean(lth_features), self.logvar(lth_features)

then with loss defined above and

opt_discriminator_encoder = optim.Adam(discriminator_encoder.parameters())

Does this look right to you? Thanks!!
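(For concreteness, the combined discriminator loss described above could be sketched like this -- all names hypothetical, and the scores stand in for sigmoid outputs of discriminator_encoder on real images, VAE reconstructions, and samples decoded from prior noise:)

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()

# Hypothetical sigmoid scores in (0, 1) for a batch of 8.
d_real = torch.sigmoid(torch.randn(8, 1, requires_grad=True))
d_recon = torch.sigmoid(torch.randn(8, 1, requires_grad=True))
d_noise = torch.sigmoid(torch.randn(8, 1, requires_grad=True))

ones, zeros = torch.ones(8, 1), torch.zeros(8, 1)
# Real images labeled 1; reconstructions and noise samples labeled 0.
d_loss = bce(d_real, ones) + bce(d_recon, zeros) + bce(d_noise, zeros)
d_loss.backward()
```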

ecilay avatar May 31 '18 21:05 ecilay

Hi, you mentioned for the encoder:

> doesn't propagate gradients back to it.

Then how do you train the encoder? Not together with the discriminator? Thanks!!

ecilay avatar Jun 01 '18 19:06 ecilay