progressive_growing_of_gans
Add method for reversing GAN to get latent representation for images
Add a method for reversing the GAN to obtain the latent representation of an image. This can help with future utilisation of the generator network. This PR also removes some trailing whitespace.
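For context, recovering a latent for a given image generally works by freezing the generator and running gradient descent on the latent vector until the generated image matches the target. Below is a minimal standalone sketch of that general technique in TensorFlow 1.x, not this repo's implementation; generator, target_image and latent_size are placeholder names, and the generator's variables are assumed to be already restored into the session:

import numpy as np
import tensorflow as tf  # TensorFlow 1.x, as used by this repo

def invert_generator(generator, target_image, latent_size, steps=1000, lr=0.01):
    # generator: callable mapping a [1, latent_size] tensor to an image tensor.
    # target_image: NumPy array shaped like a single generator output (no batch dim).
    latent = tf.Variable(np.random.randn(1, latent_size).astype(np.float32))
    target = tf.constant(target_image[np.newaxis].astype(np.float32))
    # L2 distance between the generated image and the target.
    loss = tf.reduce_sum(tf.square(generator(latent) - target))
    # Plain gradient descent on the latent only; the generator weights stay frozen.
    grad = tf.gradients(loss, latent)[0]
    update = tf.assign_sub(latent, lr * grad)
    with tf.Session() as sess:
        sess.run(latent.initializer)  # initialise only the latent variable
        for _ in range(steps):
            sess.run(update)
        return sess.run(latent)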
Nice!
This is really useful. However, the latents being returned are all NaN values. I am working with half-precision floats. Is anyone else encountering the same problem?
@avlaskin I tried to use the reverse_gan_for_etalons method with:
latents = np.random.RandomState(1).randn(1000, *Gs.input_shapes[0][1:]) # 1000 random latents
latents = latents[[0]] # hand-picked top-1
labels = np.zeros([latents.shape[0]] + Gs.input_shapes[1][1:])
img = load_image("test.png")
Gs.reverse_gan_for_etalons(latents, labels, img)
However, I keep getting the error:
InvalidArgumentError (see above for traceback): Incompatible shapes: [2] vs. [0]
Apparently, it happens at the line
gradient = tf.gradients(loss, input_latents)
The tensor input_latents seems wrong. Is it because I shouldn't construct the latents from a random state?
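For reference, printing the shapes right before the call might narrow this down; with the unconditional pre-trained networks, Gs.input_shapes should be [[None, 512], [None, 0]]:

print(latents.shape)          # (1, 512) for the default 512-dim latent space
print(labels.shape)           # (1, 0) for an unconditional model
print(np.asarray(img).shape)  # should match Gs.output_shapes[0][1:]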
Thank you.
This is really useful. However, the latents being returned are all NaN values. I am working with half-precision floats. Is anyone else encountering the same problem?
I got the same problem. It turned out that all my g values were greater than the initial c_min (1e9). I changed it to 1e12 and obtained non-NaN outputs, but the images actually generated from the recovered latent representations do not quite match my original inputs.
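From the behaviour above, c_min looks like a best-cost-so-far threshold: if every per-step cost g stays above its initial value, the best latent is never stored. A toy illustration of that failure mode (plain Python, names are illustrative only):

costs = [2e9, 5e9, 3e9]        # every step's cost exceeds the 1e9 threshold
c_min, best_step = 1e9, None   # initial best cost, no best latent yet
for step, g in enumerate(costs):
    if g < c_min:              # never true here, so best_step stays None
        c_min, best_step = g, step
print(best_step)               # None -> downstream code returns unset/NaN latents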
Thanks for this work. I was also getting NaN. I was trying to reconstitute an image with an fp16-trained model on a custom dataset. I just used
loss = tf.reduce_sum(tf.div(tf.pow(out_expr[0] - psy, 2), 1000.))
instead of the loss you wrote, changed c_min to 1e12, and it works.
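For completeness, that replacement loss spelled out (a sketch; out_expr[0] is the generator output and psy the target tensor, following the names above):

# TensorFlow 1.x. Dividing each squared error by 1000 before summing keeps the
# total within fp16 range (max finite value ~65504), so the loss no longer
# overflows to inf and the gradients stop turning into NaN.
loss = tf.reduce_sum(tf.div(tf.pow(out_expr[0] - psy, 2), 1000.))

Using tf.reduce_mean(tf.square(out_expr[0] - psy)) would achieve a similar rescaling without the hand-picked constant.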