Neural-Photo-Editor
Trained faces are all blurry and do not seem to have learned
I implemented a version in PyTorch with the same architecture as illustrated in your paper and code, though without orthogonal regularization and MDC.
However, my generated faces at 300k iterations are still very blurry, like below. Do you have any idea why this might happen? Thanks very much!!

There's plenty that can go wrong. Just based on these images it looks like even the VAE half of the network isn't working. I'd recommend starting by training DCGAN in PyTorch and tweaking that implementation, rather than rolling your own--there are too many details you have to get right, and getting just one thing wrong can break everything. You might also want to consider employing modern updates (particularly spectral norm), since they help make training more robust.
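As a concrete illustration of the spectral-norm suggestion (a sketch only; the layer sizes here are illustrative, not from the paper), in PyTorch you can wrap discriminator layers with `torch.nn.utils.spectral_norm`:

```python
import torch
import torch.nn as nn

# Sketch: a small DCGAN-style discriminator with spectral norm applied
# to each conv/linear layer. Channel counts are arbitrary examples.
def sn_conv(in_ch, out_ch):
    return nn.utils.spectral_norm(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1))

discriminator = nn.Sequential(
    sn_conv(3, 64), nn.LeakyReLU(0.2),    # 64x64 -> 32x32
    sn_conv(64, 128), nn.LeakyReLU(0.2),  # 32x32 -> 16x16
    nn.Flatten(),
    nn.utils.spectral_norm(nn.Linear(128 * 16 * 16, 1)),  # real/fake logit
)

x = torch.randn(2, 3, 64, 64)
out = discriminator(x)  # shape (2, 1)
```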
Also note that you're training on a close-cropped dataset; I recommend using the wider crops for more pleasing images.
Thanks for the prompt reply! I actually trained by tweaking a VAE/GAN model, combining the encoder and discriminator into one model as described in your paper, with two loss optimizers as shown in your code. The VAE/GAN trained fine, but my IAN training is problematic as shown above. I will look at the code again. Thanks!
Are you making sure not to propagate reconstruction gradients to the discriminator? I've always kept the "encoder" as a small MLP (or even a single dense layer) that operates on one of the last layers of the discriminator, but doesn't propagate gradients back to it.
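A minimal sketch of that gradient-blocking idea (this is not the repo's actual code; names and sizes are hypothetical): the encoder head reads discriminator features but detaches them, so the reconstruction loss only updates the head.

```python
import torch
import torch.nn as nn

# Hypothetical encoder head that reads discriminator features but blocks
# gradients from flowing back into the discriminator body.
class EncoderHead(nn.Module):
    def __init__(self, feat_dim=1000, z_dim=100):
        super().__init__()
        self.mean = nn.Linear(feat_dim, z_dim)
        self.logvar = nn.Linear(feat_dim, z_dim)

    def forward(self, disc_features):
        h = disc_features.detach()  # stop reconstruction gradients here
        return self.mean(h), self.logvar(h)

# Usage sketch: `body` stands in for the discriminator's feature extractor.
body = nn.Linear(10, 1000)
head = EncoderHead(feat_dim=1000, z_dim=100)
feats = body(torch.randn(4, 10))
mu, logvar = head(feats)
(mu.sum() + logvar.sum()).backward()
# head.mean.weight.grad is populated; body.weight.grad stays None.
```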
Yes, the loss for discriminator_encoder is bce_real + bce_reconstruction + bce_sampled_noise (bce = binary cross-entropy).
I have one model for 1) the discriminator_encoder, and one model for 2) the decoder, which is a normal DCGAN decoder. The loss above is for 1) the discriminator_encoder. Pseudocode for the discriminator_encoder class is below:
class discriminator_encoder:
    def __init__(self):
        self.features = ...       # feature map, 64*4 channels at 8x8
        self.lth_features = ...   # vector of size 1000
        self.output = ...         # a scalar (real/fake logit)
        self.mean = ...           # vector of size 1000
        self.logvar = ...         # vector of size 1000

    def forward(self, x):
        return self.lth_features, self.output, self.mean, self.logvar
Then, with the loss defined above:
opt_discriminator_encoder = optim.Adam(discriminator_encoder.parameters())
Does this look right to you? Thanks!!
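For reference, the three-term BCE loss described above might be computed like this (a sketch with my own variable names, not the asker's code; it assumes the discriminator outputs raw logits on real images, VAE reconstructions, and prior samples):

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of the three-term discriminator loss:
# d_real, d_recon, d_fake are the discriminator's logits on real images,
# reconstructions, and samples from the prior, respectively.
def discriminator_loss(d_real, d_recon, d_fake):
    ones = torch.ones_like(d_real)
    zeros = torch.zeros_like(d_fake)
    bce_real = F.binary_cross_entropy_with_logits(d_real, ones)
    bce_reconstruction = F.binary_cross_entropy_with_logits(d_recon, zeros)
    bce_sampled_noise = F.binary_cross_entropy_with_logits(d_fake, zeros)
    return bce_real + bce_reconstruction + bce_sampled_noise
```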
Hi, you mentioned for the encoder that it
doesn't propagate gradients back to it.
Then how do you train the encoder? Not together with the discriminator? Thanks!!