
reparameterization trick?

Open · ericjang opened this issue 9 years ago • 1 comment

This is really cool! It looks like you were able to get good results by resampling gen_z on each train step,

    gen_z = np.random.uniform(-1., 1., size=(GENERATOR_BATCH, GENERATOR_SEED)).astype(np.float32)

without using the reparameterization trick, which was surprising to me (see the links below, and the sketch that follows them). I would have thought that this training scheme leads to a highly discontinuous generator function, but that doesn't seem to be the case. Do you know why this works?

  • http://dpkingma.com/files/nips_workshop_v2.pdf (see page 11)
  • https://www.reddit.com/r/MachineLearning/comments/3yrzks/eli5_the_reparameterization_trick/
  • http://arxiv.org/abs/1312.6114
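
For context, a minimal NumPy sketch contrasting the two schemes: the per-step prior sampling quoted above versus a VAE-style reparameterized sample. The batch/seed sizes and the mu/log_sigma names are purely illustrative, not taken from this repo:

    import numpy as np

    GENERATOR_BATCH, GENERATOR_SEED = 64, 100  # illustrative sizes only

    # Scheme used here: draw a fresh prior sample on every train step.
    # The sample involves no learned parameters, so no gradient flows into it.
    gen_z = np.random.uniform(-1., 1.,
                              size=(GENERATOR_BATCH, GENERATOR_SEED)).astype(np.float32)

    # VAE-style reparameterization (Kingma & Welling): z is a deterministic
    # function of learned parameters (mu, log_sigma) and external noise eps,
    # so gradients can propagate through mu and log_sigma while eps is fixed.
    mu = np.zeros((GENERATOR_BATCH, GENERATOR_SEED), dtype=np.float32)         # placeholder for an encoder output
    log_sigma = np.zeros((GENERATOR_BATCH, GENERATOR_SEED), dtype=np.float32)  # placeholder for an encoder output
    eps = np.random.normal(size=(GENERATOR_BATCH, GENERATOR_SEED)).astype(np.float32)
    z = mu + np.exp(log_sigma) * eps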

ericjang avatar Jan 16 '16 18:01 ericjang

OK, I am not sure if I understood the RT correctly, but if I did, it asks us to multiply the gradient by p(z) when optimizing G(z). However, since z comes from a uniform distribution, that is equivalent to multiplying the gradient by a constant, which can be corrected for by choosing the learning rate appropriately.
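
Concretely, a toy numeric check of just that arithmetic point (the values below are made up): for z drawn from Uniform(-1, 1)^d the density p(z) = (1/2)^d is the same constant for every z, so scaling the gradient by it is indistinguishable from rescaling the learning rate.

    import numpy as np

    d = 4
    p_z = 0.5 ** d                           # constant density of Uniform(-1, 1)^d
    grad = np.array([0.3, -1.2, 0.5, 0.05])  # made-up gradient
    theta = np.zeros(d)
    lr = 0.1

    # "multiply the gradient by p(z)" vs. absorbing the constant into the learning rate
    step_a = theta - lr * (p_z * grad)
    step_b = theta - (lr * p_z) * grad
    assert np.allclose(step_a, step_b)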

Let me know if I misunderstand.

siemanko avatar Jan 16 '16 22:01 siemanko