
Add method for reversing GAN to get latent representation for images

Open avlaskin opened this issue 6 years ago • 5 comments

Add a method for reversing the GAN to obtain the latent representation of an image. This can help with future utilisation of the generator network. This PR also removes some trailing whitespace.
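
The idea behind the PR can be sketched in a few lines: treat the latent vector as the optimization variable and run gradient descent on the reconstruction loss against a target image. The snippet below is a minimal NumPy toy, not the repo's actual API; the linear `generate` function, dimensions, and step count are all illustrative stand-ins for the real generator network.

```python
import numpy as np

# Toy GAN inversion: recover a latent vector by gradient descent on the
# reconstruction loss. A fixed linear map stands in for the generator.
rng = np.random.RandomState(0)
A = rng.randn(8, 4)            # toy "generator" weights: latent dim 4 -> output dim 8

def generate(z):
    return A @ z               # stand-in for running the real generator

z_true = rng.randn(4)
target = generate(z_true)      # the "image" we want to invert

z = np.zeros(4)                # start from an arbitrary latent
lr = 0.01
for _ in range(2000):
    residual = generate(z) - target
    grad = 2.0 * A.T @ residual    # analogue of tf.gradients(loss, input_latents)
    z -= lr * grad

loss = np.sum((generate(z) - target) ** 2)
print(loss)                    # reconstruction loss, close to zero
```

Because the toy generator is linear and full column rank, the optimization recovers the latent exactly; with a real deep generator the loss landscape is non-convex and the recovered latent is only approximate.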

avlaskin avatar May 30 '18 23:05 avlaskin

Nice!

veqtor avatar Aug 13 '18 10:08 veqtor

This is really useful. However, the latents being returned are all NaN values. I am working with half-precision floats. Is anyone else encountering the same problem?

leweohlsen avatar Dec 28 '18 09:12 leweohlsen

@avlaskin I tried to use the reverse_gan_for_etalons method with:

latents = np.random.RandomState(1).randn(1000, *Gs.input_shapes[0][1:]) # 1000 random latents
latents = latents[[0]] # hand-picked top-1
labels = np.zeros([latents.shape[0]] + Gs.input_shapes[1][1:])
img = load_image("test.png")
Gs.reverse_gan_for_etalons(latents, labels, img)

However, I keep getting the error:

InvalidArgumentError (see above for traceback): Incompatible shapes: [2] vs. [0]

Apparently, it happens at the line

gradient = tf.gradients(loss, input_latents)

The tensor input_latents seems wrong. Is it because I shouldn't construct latents from random state?

Thank you.
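
One thing worth double-checking in snippets like the one above (this is a general NumPy point, not a diagnosis of this specific error): fancy indexing with `[[0]]` keeps the batch dimension, while plain `[0]` drops it, and feeding a latent without its batch dimension is a common way to end up with mismatched tensor shapes.

```python
import numpy as np

# Shape of the latents matters: the generator expects a batch dimension.
latents = np.random.RandomState(1).randn(1000, 512)  # 512 is illustrative

kept = latents[[0]]   # fancy indexing: shape (1, 512), batch dim preserved
dropped = latents[0]  # plain indexing: shape (512,), batch dim lost

print(kept.shape, dropped.shape)
```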

Wuvist avatar Feb 15 '19 06:02 Wuvist

> This is really useful. However, the latents being returned are all NaN values. I am working with half-precision floats. Is anyone else encountering the same problem?

I got the same problem. It turned out that all my g values were greater than the initial c_min (1e9). I changed it to 1e12 and obtained non-NaN outputs, but the images actually generated from the recovered latent representations do not quite match my original inputs.
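
For context on why half precision produces NaN here: float16 overflows to infinity just above 65504, so a single large squared-error term can blow up, and subsequent arithmetic on infinities yields NaN. A minimal demonstration (the value 300 is arbitrary):

```python
import numpy as np

residual = np.float16(300.0)
loss_term = residual * residual          # 90000 exceeds float16 range -> inf
print(loss_term)

nan_value = loss_term - loss_term        # inf - inf -> nan
print(np.isnan(nan_value))

# Accumulating the same term in float32 avoids the overflow:
safe = np.float32(residual) ** 2
print(safe)
```

Casting the loss computation to float32, or rescaling the loss as suggested below, are both ways to stay inside the representable range.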

yjs0704 avatar Mar 07 '19 15:03 yjs0704

Thanks for this work. I was also getting NaN. I was trying to reconstruct an image with an fp16-trained model on a custom dataset. I just replaced the loss you wrote with loss = tf.reduce_sum(tf.div(tf.pow(out_expr[0] - psy, 2), 1000.)), changed c_min to 1e12, and it works.
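
The rescaled loss in the comment above is just the summed squared error divided by 1000, which keeps the total small enough to stay below the c_min threshold. A NumPy rendering of the same scaling (array shapes here are arbitrary placeholders):

```python
import numpy as np

# Stand-ins for the generator output and the target image.
out = np.random.RandomState(0).rand(3, 64, 64).astype(np.float32)
psy = np.random.RandomState(1).rand(3, 64, 64).astype(np.float32)

plain_loss = np.sum((out - psy) ** 2)
scaled_loss = np.sum((out - psy) ** 2 / 1000.0)   # the rescaled variant

print(np.isclose(scaled_loss, plain_loss / 1000.0))
```

The scaling does not change the location of the minimum, only the magnitude of the loss and its gradients.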

dmenig avatar Jul 20 '19 17:07 dmenig