text-to-image
Error: d_bn1/d_bn1_2/moments/Squeeze/ExponentialMovingAverage/ does not exist
When I run the generate_images script, I get the errors below. Could you please suggest a fix for this? Thanks.
====================================================
python generate_images.py --model_path=Data/Models/latest_model_flowers_temp.ckpt --n_images=8

Traceback (most recent call last):
  File "generate_images.py", line 106, in
I am struggling with the same error, is there any fix?
Try adding the line below in ops.py, just before the ema.apply call:
    with tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE):
This resolved the error for me.
I'm using TensorFlow v0.12. This works for me:
add with tf.variable_scope(tf.get_variable_scope(), reuse=False):
before ema.apply
Where exactly should with tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE): be added? Before ema_apply_op = self.ema.apply([batch_mean, batch_var])? I tried that, but it does not work. @ravindra82
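For reference, in a typical DCGAN-style ops.py the placement looks like the sketch below. This is only a sketch: it assumes the usual batch_norm class built on tf.nn.moments and tf.train.ExponentialMovingAverage, and the class, argument, and scope names here are assumptions rather than this repository's exact code.

    import tensorflow as tf

    class batch_norm(object):
        # Minimal sketch of a DCGAN-style batch norm layer (TF 1.x API).
        def __init__(self, epsilon=1e-5, momentum=0.9, name="batch_norm"):
            self.epsilon = epsilon
            self.ema = tf.train.ExponentialMovingAverage(decay=momentum)
            self.name = name

        def __call__(self, x):
            shape = x.get_shape().as_list()
            with tf.variable_scope(self.name, reuse=tf.AUTO_REUSE):
                # beta/gamma stay inside the layer's own scope, so layers with
                # different channel counts never collide.
                self.beta = tf.get_variable("beta", [shape[-1]],
                                            initializer=tf.constant_initializer(0.0))
                self.gamma = tf.get_variable("gamma", [shape[-1]],
                                             initializer=tf.random_normal_initializer(1.0, 0.02))
                batch_mean, batch_var = tf.nn.moments(x, [0, 1, 2], name="moments")
                # The suggested line goes here, immediately before ema.apply, so the
                # moving-average variables can be created (or reused) without the
                # "ExponentialMovingAverage does not exist" error.
                with tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE):
                    ema_apply_op = self.ema.apply([batch_mean, batch_var])
                with tf.control_dependencies([ema_apply_op]):
                    mean, var = tf.identity(batch_mean), tf.identity(batch_var)
            return tf.nn.batch_normalization(x, mean, var, self.beta, self.gamma, self.epsilon)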
In ops.py I used with tf.variable_scope(tf.get_variable_scope(), reuse=False): to replace with tf.variable_scope(self.name, reuse=tf.AUTO_REUSE) as scope:, but now I get ValueError: Trying to share variable beta, but specified shape (256,) and found shape (512,). What is wrong? @ravindra82
@gentlebreeze1 did you find a solution? I have the same problem.
I used with tf.variable_scope(self.name, reuse=tf.AUTO_REUSE) as scope: in ops.py, but then an error occurred like this:
WARNING:tensorflow:From /home/hp/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Traceback (most recent call last):
  File "train.py", line 238, in <module>
    main()
ValueError: Only call `sigmoid_cross_entropy_with_logits` with named arguments (labels=..., logits=..., ...)
How can I solve this? Please reply as soon as possible.
@remyavijeesh22 Use
    g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=disc_fake_image_logits, labels=tf.ones_like(disc_fake_image)))
instead of
    g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(disc_fake_image_logits, tf.ones_like(disc_fake_image)))
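The discriminator losses need the same named-argument treatment; the later comments in this thread mention d_loss1, d_loss2 and d_loss3. Below is a sketch of what the full loss section might look like after the change, meant as a drop-in edit where those losses are already defined. The disc_real_image and disc_wrong_image tensor names, and which labels (ones/zeros) pair with each term, follow the usual real/wrong/fake discriminator setup and are assumptions; only disc_fake_image appears in the snippet above.

    # Sketch: all loss terms in the named-argument form required by TF >= 1.0.
    # Tensor names other than disc_fake_image / disc_fake_image_logits are assumed
    # to come from the model's build step.
    g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=disc_fake_image_logits, labels=tf.ones_like(disc_fake_image)))

    d_loss1 = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=disc_real_image_logits, labels=tf.ones_like(disc_real_image)))
    d_loss2 = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=disc_wrong_image_logits, labels=tf.zeros_like(disc_wrong_image)))
    d_loss3 = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=disc_fake_image_logits, labels=tf.zeros_like(disc_fake_image)))

    d_loss = d_loss1 + d_loss2 + d_loss3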
I changed that. The error then went on to complain about d_loss1, so I added logits= and labels= in the same style as g_loss, and I made the same change for d_loss2 and d_loss3 as well.
When I run
    $ python2 generate_images.py --model_path=Data/Models/latest_model_flowers_temp.ckpt --n_images=8
I get another error:
    Tensor name "d_bn1_1/moments/Squeeze/ExponentialMovingAverage" not found in checkpoint files Data/Models/latest_model_flowers_temp.ckpt
    [[node save/RestoreV2 (defined at generate_images.py:66) ]]
Any suggestion?
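The _1 in d_bn1_1/... usually means the d_bn1 name got uniquified because that scope was created a second time when the graph was rebuilt, so the moving-average variable names no longer match the names stored in the checkpoint. A quick way to see the mismatch is to print what the checkpoint actually contains next to what the rebuilt graph expects. A minimal sketch, assuming the checkpoint path from the command above (run the second loop after the model has been constructed, e.g. just before Saver.restore in generate_images.py):

    import tensorflow as tf

    ckpt_path = "Data/Models/latest_model_flowers_temp.ckpt"

    # Variable names (and shapes) stored in the checkpoint.
    reader = tf.train.NewCheckpointReader(ckpt_path)
    for name, shape in sorted(reader.get_variable_to_shape_map().items()):
        print("checkpoint:", name, shape)

    # Variable names the rebuilt graph will try to restore.
    for v in tf.global_variables():
        print("graph:", v.name, v.shape.as_list())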
I tried all of the above, and then I get the following error. Could you please suggest a fix for this?
===================================================================
Traceback (most recent call last):
  File "train.py", line 238, in <module>
    main()
  File "train.py", line 78, in main
    d_optim = tf.train.AdamOptimizer(args.learning_rate, beta1 = args.beta1).minimize(loss['d_loss'], var_list=variables['d_vars'])
  File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\training\optimizer.py", line 413, in minimize
    name=name)
  File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\training\optimizer.py", line 597, in apply_gradients
    self._create_slots(var_list)
  File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\training\adam.py", line 131, in _create_slots
    self._zeros_slot(v, "m", self._name)
  File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\training\optimizer.py", line 1155, in _zeros_slot
    new_slot_variable = slot_creator.create_zeros_slot(var, op_name)
  File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\training\slot_creator.py", line 190, in create_zeros_slot
    colocate_with_primary=colocate_with_primary)
  File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\training\slot_creator.py", line 164, in create_slot_with_initializer
    dtype)
  File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\training\slot_creator.py", line 74, in _create_slot_var
    validate_shape=validate_shape)
  File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 1496, in get_variable
    aggregation=aggregation)
  File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 1239, in get_variable
    aggregation=aggregation)
  File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 562, in get_variable
    aggregation=aggregation)
  File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 514, in _true_getter
    aggregation=aggregation)
  File "C:\Users\c0116\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 882, in _get_single_variable
    "reuse=tf.AUTO_REUSE in VarScope?" % name)
ValueError: Variable d_h0_conv/w/Adam/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope?
Did you solve this problem? I get the same error and have not found a way around it. Can you give me a suggestion?
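One workaround that is often suggested for this Adam slot-variable error is to create the optimizer inside an AUTO_REUSE scope, so that minimize() can create the d_h0_conv/w/Adam slots even though an enclosing scope has reuse switched on. This is only a sketch against the train.py line shown in the traceback (the loss and variables dicts and args come from there); it papers over the symptom, so it is worth also checking that the ops.py changes are not leaving the outer variable scope in reuse mode in the first place.

    # Sketch of a workaround in train.py around the line from the traceback above.
    with tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE):
        d_optim = tf.train.AdamOptimizer(args.learning_rate, beta1=args.beta1).minimize(
            loss['d_loss'], var_list=variables['d_vars'])
        # If the generator optimizer is defined the same way, it needs the same wrapping.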
Hello, have you solved the "d_bn1_1/moments/Squeeze/ExponentialMovingAverage not found in checkpoint" problem above?
Hey, were you able to fix this?
same problem T_T