
A Keras implementation of conditional WGAN-GP

10 cwgan-gp issues

10182 [D loss: -7891.137695] [G loss: -3960.750488]
10183 [D loss: -7876.647461] [G loss: -3944.175293]
10184 [D loss: -7862.666992] [G loss: -3981.623291]
10185 [D loss: -7930.389160] [G loss: -3960.127441]
10186 [D...

I saw someone else commenting in Chinese, so I'll do the same. First of all, thank you for the work you put into this code! One question: in cwgan-gp.py, can the two Input definitions at Line 106 and Line 108 be omitted, reusing the ones from Line 68 and Line 71 instead? That is, for Line 110, just use img = self.generator([z_disc, label]). I'm not sure whether this change would affect the training of the generator and the critic. Looking forward to your reply, thanks!
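A hedged sketch of what the question describes, using toy layer sizes rather than the repo's actual architecture: in the Keras functional API, one pair of Input tensors can be shared by several models, so whether to define fresh Inputs is largely a wiring/readability choice, as long as each Model's declared inputs match the tensors used to build its outputs.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Toy sizes (100-dim noise, one-hot 10-class label) -- illustrative only,
# not the dimensions used in cwgan-gp.py.
z = layers.Input(shape=(100,))
label = layers.Input(shape=(10,))
h = layers.Concatenate()([z, label])
img = layers.Dense(28 * 28, activation="tanh")(h)

generator = Model([z, label], img, name="generator")

# Reuse the SAME Input tensors to build a second model that wraps the
# generator, instead of declaring new Inputs:
fake = generator([z, label])
score = layers.Dense(1)(fake)
combined = Model([z, label], score)
print(combined.output_shape)  # (None, 1)
```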

I have a large amount of data that needs mapping before being fed to the networks. So, instead of `(X_train, y_train), (_, _) = mnist.load_data()` I will use: `dataset = tf.data.Dataset.from_tensor_slices((X_train,...
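A minimal sketch of such a pipeline, assuming TensorFlow 2.x and using random stand-in arrays in place of the real data; the rescaling to [-1, 1] matches the usual tanh generator output range, and the preprocessing function name is hypothetical:

```python
import numpy as np
import tensorflow as tf

# Stand-in data; the real X_train/y_train would come from disk.
X_train = np.random.rand(64, 28, 28).astype("float32")
y_train = np.random.randint(0, 10, size=(64,))

def preprocess(img, label):
    # Rescale [0, 1] -> [-1, 1] and add a channel axis, so the mapping
    # happens lazily per element instead of on the whole array up front.
    img = img * 2.0 - 1.0
    return tf.expand_dims(img, -1), label

dataset = (
    tf.data.Dataset.from_tensor_slices((X_train, y_train))
    .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
    .shuffle(64)
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)
)

imgs, labels = next(iter(dataset))
print(tuple(imgs.shape))  # (32, 28, 28, 1)
```

The training loop then iterates over `dataset` batches instead of indexing into an in-memory array.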

In the WGAN-GP paper you linked, the authors state on page 5: "**No critic batch normalization** .... In particular, we recommend layer normalization [3] as a drop-in replacement for batch...
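A sketch of what that swap looks like in a Keras critic, assuming TensorFlow 2.x; the block structure and filter counts here are illustrative, not the repo's exact architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers

def critic_block(x, filters):
    x = layers.Conv2D(filters, 3, strides=2, padding="same")(x)
    # LayerNormalization normalizes per sample, not per batch, so it does
    # not couple samples within a batch -- the property the WGAN-GP paper
    # needs, since the gradient penalty is applied to each input separately.
    x = layers.LayerNormalization()(x)
    return layers.LeakyReLU(0.2)(x)

inp = layers.Input(shape=(28, 28, 1))
h = critic_block(inp, 16)
h = critic_block(h, 32)
# Linear output: a Wasserstein critic emits an unbounded score, no sigmoid.
out = layers.Dense(1)(layers.Flatten()(h))
critic = tf.keras.Model(inp, out)
print(critic.output_shape)  # (None, 1)
```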

Hello, thanks for your excellent project. I want to train the network on my own images of size 128x128x1, but I encountered some issues. I have changed ``` self.img_rows...
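Beyond changing `self.img_rows`/`self.img_cols`, a generator built from 2x upsampling blocks also needs its depth adjusted so the spatial size actually reaches the target. A quick back-of-the-envelope check, assuming the generator starts from a small 8x8 base grid and doubles the resolution per block (the base size is an assumption, not taken from the repo):

```python
# Count how many 2x upsampling blocks are needed to go from the base
# grid to the target resolution.
base = 8      # assumed starting spatial size of the generator
target = 128  # desired output size
size, stages = base, 0
while size < target:
    size *= 2
    stages += 1
print(stages, size)  # 4 stages: 8 -> 16 -> 32 -> 64 -> 128
```

The critic's downsampling path needs the mirror-image change, reducing 128 back down to its final feature-map size.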

When I use an RNN generator or discriminator I get an error that tf.while_loop does not support second-order gradients. Setting unroll=True doesn't work either, because for the unroll parameter the time step must...
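For context on why this bites WGAN-GP specifically: the gradient penalty differentiates the critic's gradient norm, i.e. it needs a gradient of a gradient. A toy nested-GradientTape sketch on a scalar function (not an RNN), assuming TensorFlow 2.x:

```python
import tensorflow as tf

x = tf.Variable(2.0)
with tf.GradientTape() as outer:
    with tf.GradientTape() as inner:
        y = x ** 3
    # First-order gradient, computed inside the outer tape so it is
    # itself differentiable: dy/dx = 3x^2 = 12 at x = 2.
    dy = inner.gradient(y, x)
# Second-order gradient, the kind the gradient penalty needs:
# d(dy)/dx = 6x = 12 at x = 2.
d2y = outer.gradient(dy, x)
print(float(dy), float(d2y))  # 12.0 12.0
```

An RNN layer's internal tf.while_loop is exactly the step in this chain that fails to provide the second-order gradient.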

According to the definition of gradient_penalty, it's impossible to get a negative value. Why did I get a positive value?
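The nonnegativity follows directly from the formula: the penalty is lambda * (||grad||_2 - 1)^2, and a squared term cannot be negative. A NumPy sketch of the standard WGAN-GP penalty (gradient values here are random stand-ins for real critic gradients):

```python
import numpy as np

def gradient_penalty(grads, lambda_gp=10.0):
    # grads: (batch, ...) gradients of the critic output w.r.t. the
    # interpolated samples. Flatten per sample, take the L2 norm, and
    # penalize its squared distance from 1.
    norms = np.sqrt(np.sum(grads.reshape(grads.shape[0], -1) ** 2, axis=1))
    return lambda_gp * np.mean((norms - 1.0) ** 2)

rng = np.random.default_rng(0)
g = rng.normal(size=(8, 28, 28, 1))
gp = gradient_penalty(g)
print(gp >= 0.0)  # True: the squared term makes the penalty nonnegative
```

So a positive penalty is the expected case; only exactly-unit-norm gradients would make it zero.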

Hello, I ran the code you posted and found that with the original parameter settings D's loss is very large. Over 20,000 iterations, the last five pairs of losses were:
> 1. -10472.333, 4983.048
> 2. -10244.234, 5052.324
> 3. -354.68842, -4623.4795
> 4. 116.12535, 4025.599
> 5. -10456.913, 4746.973

As I understand WGAN, D's loss represents the EM distance between the generator's nonparametrically estimated distribution and the real distribution, so I don't think this distance has reached a converged state. The samples the Generator predicts at the end also don't look good to the eye. Have you encountered this problem, or is 20,000 iterations simply not enough to converge?

I used your code and got negative D_loss values. I thought it would converge to a positive value close to 0, but do negative values not matter? Thanks.
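A toy illustration of the sign convention behind both of the loss questions above (the scores are made up): the critic minimizes mean(D(fake)) - mean(D(real)), so once it scores real samples higher than fakes its loss goes negative, and the negated loss is what approximates the EM distance.

```python
import numpy as np

# Hypothetical critic scores for a batch of real and fake samples.
d_real = np.array([3.0, 2.5, 3.2])
d_fake = np.array([-1.0, -0.5, -0.8])

# Critic objective as usually implemented: minimize D(fake) - D(real)
# (plus the gradient penalty, omitted here).
critic_loss = d_fake.mean() - d_real.mean()
print(critic_loss < 0)  # True: a negative critic loss is expected;
                        # -critic_loss estimates the EM distance
```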