wgan-gp

Open yaxingwang opened this issue 5 years ago • 7 comments

Hi @taki0112

Thanks for your contribution. I am trying your code. What I am using is the following:

python main.py --dataset celebs --gan_type hinge --img_size 128

which works.

But when I try python main.py --dataset celebs --gan_type wgan-gp --img_size 128 --critic_num 5

it gets stuck at self.d_optim = tf.train.AdamOptimizer(self.d_learning_rate, beta1=self.beta1, beta2=self.beta2).minimize(self.d_loss, var_list=d_vars)

Did you test this?
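[Editor's note] For context, the only substantive difference between the two runs is the discriminator loss. A minimal NumPy sketch of the two losses on dummy critic scores (the names d_real / d_fake are illustrative, not from the repo):

```python
import numpy as np

# Dummy critic outputs for a small batch of real and fake images.
d_real = np.array([1.5, 0.2, -0.3])
d_fake = np.array([-1.2, 0.4, 0.1])

# Hinge loss for the discriminator (the run that works).
hinge_d_loss = np.mean(np.maximum(0.0, 1.0 - d_real)) + \
               np.mean(np.maximum(0.0, 1.0 + d_fake))

# WGAN critic loss; the full wgan-gp loss adds a gradient penalty term,
# and building that extra term is where graph construction stalls.
wgan_d_loss = np.mean(d_fake) - np.mean(d_real)
```

So the hang is not in the Adam update itself but in constructing the gradients for the extra penalty term that wgan-gp adds to d_loss.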

yaxingwang avatar May 11 '19 06:05 yaxingwang

wgan-gp took a strangely long time. I haven't found a cause yet.

taki0112 avatar May 13 '19 09:05 taki0112

Thanks. In fact, wgan-lp does not work either.

yaxingwang avatar May 13 '19 10:05 yaxingwang

I have the same problem. It seems that the gradient penalty cannot be back-propagated successfully.

As far as I can tell, tf.gradients() gets stuck while computing the gradients. But almost all WGAN-GP gradient penalties are implemented this way.

I cannot solve this. Does anyone have suggestions?
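[Editor's note] The penalty being discussed works like this: sample points on straight lines between real and fake samples, take the critic's gradient there, and penalize the deviation of its norm from 1. In the repo's TF1 setting that gradient comes from tf.gradients(), the call that stalls here. A toy NumPy version using a linear critic f(x) = w . x, whose gradient is just w, so no autodiff is needed (all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([3.0, 4.0])            # toy linear critic f(x) = w . x
real = rng.normal(size=(8, 2))
fake = rng.normal(size=(8, 2))

# Interpolate between real and fake samples, one alpha per sample.
alpha = rng.uniform(size=(8, 1))
interp = alpha * real + (1.0 - alpha) * fake

# For f(x) = w . x the gradient at every point is w, so ||grad|| = ||w|| = 5.
grad = np.broadcast_to(w, interp.shape)
grad_norm = np.linalg.norm(grad, axis=1)

# Gradient penalty: mean squared deviation of the gradient norm from 1.
gp = np.mean((grad_norm - 1.0) ** 2)   # (5 - 1)^2 = 16 for this toy critic
```

In the TF1 code, grad would instead be tf.gradients(critic(interp), interp)[0], and gp would be added to d_loss with a weighting coefficient before minimize() is called.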

syning94 avatar Jul 04 '19 02:07 syning94

I wonder why there is nothing in the results folder during the training phase. I look forward to hearing from you.

xuhui1994 avatar Jul 16 '19 06:07 xuhui1994

I tried to use wgan-gp, and it got stuck for a long time. I even thought it wasn't working at all.

xuhui1994 avatar Jul 16 '19 07:07 xuhui1994

Hi, I tried this code with a small amount of data and got a ResourceExhaustedError, so I want to know how to change the setting gpu_device = '/gpu:0' in the code to use 4 GPUs. Thank you!
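[Editor's note] Changing gpu_device alone only moves the whole model to a different single device. Using 4 GPUs requires data parallelism: split each batch into per-GPU shards, compute per-shard gradients (in TF1, each shard's graph built under its own tf.device('/gpu:%d' % i) scope), and average them before applying one update. The shard-and-average logic, sketched in NumPy with a toy per-shard "gradient" (all names illustrative, not from the repo):

```python
import numpy as np

num_gpus = 4
batch = np.arange(32, dtype=np.float64).reshape(8, 4)  # toy batch of 8 samples

# One shard per GPU; in TF1 each shard's loss and gradients would be
# constructed under its own tf.device('/gpu:%d' % i) scope.
shards = np.array_split(batch, num_gpus, axis=0)

# Toy per-shard "gradient": here just the shard mean, standing in for
# the gradients an optimizer would compute on that GPU's tower.
tower_grads = [shard.mean(axis=0) for shard in shards]

# Average the tower gradients before applying a single update, as in
# standard multi-GPU data-parallel training.
avg_grad = np.mean(tower_grads, axis=0)
```

Note that a ResourceExhaustedError on one GPU usually means the model or batch is too large for that card; data parallelism lets you keep the effective batch size while putting a smaller slice on each GPU.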

Orchid0714 avatar Aug 19 '19 08:08 Orchid0714

Hi, I'm getting a memory error:

Total size of variables: 198818145
Total bytes of variables: 795272580
[*] Reading checkpoints...
[*] Failed to find a checkpoint
[!] Load failed...

I'm using an NVIDIA GeForce RTX card with 6 GB of memory and 32 GB of RAM. Can you solve this?
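[Editor's note] The two numbers in that log are consistent with float32 weights: 198,818,145 variables at 4 bytes each is about 795 MB for the parameters alone, before activations and Adam's per-variable moment buffers, so running out of a 6 GB card is plausible. The arithmetic:

```python
# Variable count and byte count reported in the log above.
num_vars = 198_818_145
num_bytes = 795_272_580

# 4 bytes per variable means float32 parameters.
assert num_bytes == num_vars * 4

# Adam keeps two extra float32 moment buffers per variable, so optimizer
# state roughly triples parameter memory (a rough estimate, not a measurement).
approx_param_plus_adam_gb = num_bytes * 3 / 1024**3
```

A smaller img_size or batch size (or a card with more memory) is the usual way out.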

manvirvirk avatar Apr 23 '20 08:04 manvirvirk