wiseodd.github.io
Possible bug: WGAN TensorFlow
Hey there,
I think I spotted a mistake in the WGAN TensorFlow implementation you describe in one of your posts. You are passing both the WGAN optimization op and the weight-clipping op to the session. The problem is that TensorFlow usually parallelizes everything it can, so you probably want to make sure the clipping op runs only after the optimization step, for instance with tf.control_dependencies.
Best, Magnus
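For reference, here is a minimal sketch of the ordering Magnus suggests, assuming TF 1.x and hypothetical stand-ins for the critic's variables (`theta_D`) and loss (`D_loss`); the names do not necessarily match the original post:

```python
import tensorflow as tf

# Hypothetical stand-ins for the critic's variables and loss.
theta_D = [tf.Variable(tf.random_normal([4, 4]))]
D_loss = tf.reduce_mean(theta_D[0])

D_solver = tf.train.RMSPropOptimizer(1e-4).minimize(D_loss, var_list=theta_D)

# control_dependencies makes the clip ops wait for the optimizer step,
# so the session cannot schedule the two concurrently or out of order.
with tf.control_dependencies([D_solver]):
    clip_D = [p.assign(tf.clip_by_value(p, -0.01, 0.01)) for p in theta_D]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(clip_D)  # runs D_solver first, then the clipping
```

With this setup, running `clip_D` alone is enough: the dependency forces the optimizer step to complete before the weights are clipped.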
I think there is a mistake in the implementation of the WGAN loss for the discriminator: a negative sign is missing, since the objective is to maximize D_real and minimize D_fake for the discriminator.
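For reference, a minimal sketch of the standard WGAN sign convention being pointed out, with hypothetical tensor names `D_real` and `D_fake` for the critic's outputs on real and generated samples:

```python
import tensorflow as tf

# Hypothetical critic outputs on real and generated batches.
D_real = tf.placeholder(tf.float32, [None, 1])
D_fake = tf.placeholder(tf.float32, [None, 1])

# WGAN critic objective: maximize E[D(x_real)] - E[D(x_fake)].
# Optimizers minimize, so the loss handed to them needs the negative sign.
D_loss = -(tf.reduce_mean(D_real) - tf.reduce_mean(D_fake))

# Generator objective: make D_fake large, i.e. minimize -E[D(x_fake)].
G_loss = -tf.reduce_mean(D_fake)
```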