Xiangyu Qi

Results: 19 comments of Xiangyu Qi

Thank you for your reply!!!

```python
beta = tf.random_uniform(imageRotated.get_shape(), minval=0., maxval=1.)
differences = G - imageRotated
interpolates = imageRotated + beta * differences
D_inter = self.discriminator(interpolates, isTraining=True, reuse=True)
gradients = tf.gradients(D_inter, [interpolates])[0]
slopes = tf.sqrt(tf.reduce_sum(tf.square(gradients), reduction_indices=[1]))
# ...
```
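The interpolation step in the snippet above can be sketched in plain NumPy; `real` and `fake` are illustrative stand-ins for `imageRotated` and `G`, and the shapes are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(size=(4, 8))   # stand-in for a batch of real samples
fake = rng.normal(size=(4, 8))   # stand-in for a batch of generated samples

# One uniform mixing coefficient per element, as tf.random_uniform does above.
beta = rng.uniform(0., 1., size=real.shape)
interpolates = real + beta * (fake - real)
```

Each interpolated point lies on the segment between its real and fake sample, which is exactly where WGAN-GP evaluates the critic's gradient norm.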

Thank you! I will go to check it again.

> ```
> dLossReal = -tf.reduce_mean(dReal)
> dLossFake = tf.reduce_mean(dFake)
> self.dLoss = dLossFake + dLossReal + self.theta*GP  # loss of discriminator (with GP)
> self.gLoss = -tf.reduce_mean(dFake)  # loss of generator
> ```
> ...
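The sign conventions in the quoted losses can be checked on toy numbers; `dReal` and `dFake` here are made-up critic scores, and `theta`/`GP` are placeholder values for the penalty weight and penalty term.

```python
import numpy as np

dReal = np.array([2.0, 1.0])    # critic scores on real samples (toy values)
dFake = np.array([-1.0, 0.0])   # critic scores on fake samples (toy values)
theta, GP = 10.0, 0.0           # hypothetical penalty weight and penalty value

# Critic minimizes fake-score-minus-real-score (wants real scores higher);
# generator minimizes the negated fake score (wants fake scores higher).
dLoss = dFake.mean() - dReal.mean() + theta * GP
gLoss = -dFake.mean()
```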

And another important question: is the calculation of GP correct? ... I really think it is wrong... `slopes = tf.sqrt(tf.reduce_sum(tf.square(gradients), reduction_indices=[1]))  # gradient penalty` It...
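The doubt is well founded for image-shaped tensors: with gradients of shape `(N, H, W, C)`, the per-sample gradient norm in the penalty should sum squares over every axis except the batch axis. Reducing over axis 1 alone, as in the quoted line, leaves an `(N, W, C)` array instead of one slope per sample. A NumPy sketch with illustrative shapes:

```python
import numpy as np

grads = np.ones((4, 8, 8, 3))  # stand-in for the tf.gradients output

# Reducing over axis 1 only keeps the wrong shape (N, W, C).
slopes_wrong = np.sqrt(np.square(grads).sum(axis=1))
# Reducing over all non-batch axes gives one slope per sample, shape (N,).
slopes_right = np.sqrt(np.square(grads).sum(axis=(1, 2, 3)))
gp = np.mean((slopes_right - 1.0) ** 2)  # the gradient penalty term
```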

Update: I have found the bug. Since there are norm layers in G, its optimizer must declare the dependency on their update ops. If we want to make the WGAN-GP work, the calculation...

> Cool, thanks for your experiments.
> Besides the WGAN-GP, you should try spectral normalization for the discriminator; it works like magic.

Really? In the discriminator? Maybe you mean the generator?...
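For reference, spectral normalization is indeed usually applied to the discriminator's weight matrices. Its core step is a power iteration estimating each matrix's largest singular value; here is a minimal NumPy sketch (function name and shapes are illustrative, not any library's API).

```python
import numpy as np

def spectral_norm(W, iters=200, seed=0):
    """Estimate the largest singular value of W by power iteration,
    the core step of spectral normalization."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=W.shape[0])
    for _ in range(iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    return u @ W @ v  # sigma, the spectral norm of W

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 8))
W_sn = W / spectral_norm(W)  # normalized weight: largest singular value ~ 1
```

Dividing each layer's weights by this estimate constrains the discriminator to be (roughly) 1-Lipschitz, which is the same goal the gradient penalty pursues.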

> You can take a look at the conclusions of this [paper](https://arxiv.org/pdf/1807.04720.pdf) by Google.
> They said `Our fair and thorough empirical evaluation suggests that when the computational budget is...

Pretty fast. On a single 1080 Ti, it should take less than ten hours, I'd estimate.

@GalAster Here is the dataset. Baidu Netdisk: https://pan.baidu.com/s/1y_PBOs4xhRFsy2PK4-R7Zw Key: w1bl

Hi, thanks for pointing this out. I believe you are right: with top_p = 0, decoding is already greedy.
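A toy re-implementation of nucleus (top-p) filtering makes the point concrete: with `top_p = 0` only the highest-probability token survives, so sampling reduces to greedy decoding. The helper below is illustrative, not any particular library's code.

```python
import numpy as np

def top_p_filter(probs, top_p):
    """Keep the smallest set of highest-probability tokens whose cumulative
    probability reaches top_p (at least one token is always kept), then
    renormalize. Illustrative sketch only."""
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    # Number of tokens kept: first index where cumulative prob >= top_p.
    k = int(np.searchsorted(cum, top_p) + 1)
    mask = np.zeros_like(probs)
    mask[order[:k]] = probs[order[:k]]
    return mask / mask.sum()

probs = np.array([0.5, 0.3, 0.2])
filtered = top_p_filter(probs, top_p=0.0)
# With top_p = 0 only the argmax token remains: filtered == [1.0, 0.0, 0.0]
```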