
Error possibly linked to definition of variable scope

Open shirishr opened this issue 8 years ago • 6 comments

Error:

```
ValueError: Variable d_w1/Adam/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?
```

raised at the line:

```python
with tf.variable_scope(tf.get_variable_scope(), reuse=False) as scope:
```

shirishr avatar Jul 02 '17 18:07 shirishr

Same issue here.

bluebirdlboro avatar Oct 21 '17 10:10 bluebirdlboro

Have you tried restarting the notebook and running from the first cell? Many of these scope errors come from repeated definitions that you'd get from running an individual cell multiple times without re-initializing the graph.
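
If restarting the whole kernel is inconvenient, with TF 1.x you can also clear leftover graph state at the top of the cell before redefining the network. A minimal sketch:

```python
import tensorflow as tf

# Discard all previously defined ops and variables so that re-running the
# cell builds the network in a clean graph instead of colliding with
# stale definitions from earlier runs.
tf.reset_default_graph()
```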

jonbruner avatar Oct 22 '17 04:10 jonbruner

I have tried restarting and running from the first cell. Also, if the variable were being defined repeatedly, the error should say that it already exists, but instead it says it does not exist.

bluebirdlboro avatar Oct 30 '17 08:10 bluebirdlboro

This is the whole error message I got:

```
ValueError                                Traceback (most recent call last)
<ipython-input-4-f781a98cf3e0> in <module>()
     31 # Increasing from 0.001 in GitHub version
     32 with tf.variable_scope(tf.get_variable_scope(), reuse=False) as scope:
---> 33     d_trainer_fake = tf.train.AdamOptimizer(0.0001).minimize(d_loss_fake, var_list=d_vars)
     34     d_trainer_real = tf.train.AdamOptimizer(0.0001).minimize(d_loss_real, var_list=d_vars)
     35 

~/workspace/jupyter/env3/lib/python3.5/site-packages/tensorflow/python/training/optimizer.py in minimize(self, loss, global_step, var_list, gate_gradients, aggregation_method, colocate_gradients_with_ops, name, grad_loss)
    323 
    324     return self.apply_gradients(grads_and_vars, global_step=global_step,
--> 325                                 name=name)
    326 
    327   def compute_gradients(self, loss, var_list=None,

~/workspace/jupyter/env3/lib/python3.5/site-packages/tensorflow/python/training/optimizer.py in apply_gradients(self, grads_and_vars, global_step, name)
    444                        ([str(v) for _, _, v in converted_grads_and_vars],))
    445     with ops.control_dependencies(None):
--> 446       self._create_slots([_get_variable_for(v) for v in var_list])
    447     update_ops = []
    448     with ops.name_scope(name, self._name) as name:

~/workspace/jupyter/env3/lib/python3.5/site-packages/tensorflow/python/training/adam.py in _create_slots(self, var_list)
    130     # Create slots for the first and second moments.
    131     for v in var_list:
--> 132       self._zeros_slot(v, "m", self._name)
    133       self._zeros_slot(v, "v", self._name)
    134 

~/workspace/jupyter/env3/lib/python3.5/site-packages/tensorflow/python/training/optimizer.py in _zeros_slot(self, var, slot_name, op_name)
    764     named_slots = self._slot_dict(slot_name)
    765     if _var_key(var) not in named_slots:
--> 766       named_slots[_var_key(var)] = slot_creator.create_zeros_slot(var, op_name)
    767     return named_slots[_var_key(var)]

~/workspace/jupyter/env3/lib/python3.5/site-packages/tensorflow/python/training/slot_creator.py in create_zeros_slot(primary, name, dtype, colocate_with_primary)
    172     return create_slot_with_initializer(
    173         primary, initializer, slot_shape, dtype, name,
--> 174         colocate_with_primary=colocate_with_primary)
    175   else:
    176     val = array_ops.zeros(slot_shape, dtype=dtype)

~/workspace/jupyter/env3/lib/python3.5/site-packages/tensorflow/python/training/slot_creator.py in create_slot_with_initializer(primary, initializer, shape, dtype, name, colocate_with_primary)
    144       with ops.colocate_with(primary):
    145         return _create_slot_var(primary, initializer, "", validate_shape, shape,
--> 146                                 dtype)
    147     else:
    148       return _create_slot_var(primary, initializer, "", validate_shape, shape,

~/workspace/jupyter/env3/lib/python3.5/site-packages/tensorflow/python/training/slot_creator.py in _create_slot_var(primary, val, scope, validate_shape, shape, dtype)
     64       use_resource=_is_resource(primary),
     65       shape=shape, dtype=dtype,
---> 66       validate_shape=validate_shape)
     67   variable_scope.get_variable_scope().set_partitioner(current_partitioner)
     68 

~/workspace/jupyter/env3/lib/python3.5/site-packages/tensorflow/python/ops/variable_scope.py in get_variable(name, shape, dtype, initializer, regularizer, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
   1063       collections=collections, caching_device=caching_device,
   1064       partitioner=partitioner, validate_shape=validate_shape,
--> 1065       use_resource=use_resource, custom_getter=custom_getter)
   1066 get_variable_or_local_docstring = (
   1067     """%s

~/workspace/jupyter/env3/lib/python3.5/site-packages/tensorflow/python/ops/variable_scope.py in get_variable(self, var_store, name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
    960           collections=collections, caching_device=caching_device,
    961           partitioner=partitioner, validate_shape=validate_shape,
--> 962           use_resource=use_resource, custom_getter=custom_getter)
    963 
    964   def _get_partitioned_variable(self,

~/workspace/jupyter/env3/lib/python3.5/site-packages/tensorflow/python/ops/variable_scope.py in get_variable(self, name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
    365           reuse=reuse, trainable=trainable, collections=collections,
    366           caching_device=caching_device, partitioner=partitioner,
--> 367           validate_shape=validate_shape, use_resource=use_resource)
    368 
    369   def _get_partitioned_variable(

~/workspace/jupyter/env3/lib/python3.5/site-packages/tensorflow/python/ops/variable_scope.py in _true_getter(name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource)
    350           trainable=trainable, collections=collections,
    351           caching_device=caching_device, validate_shape=validate_shape,
--> 352           use_resource=use_resource)
    353 
    354     if custom_getter is not None:

~/workspace/jupyter/env3/lib/python3.5/site-packages/tensorflow/python/ops/variable_scope.py in _get_single_variable(self, name, shape, dtype, initializer, regularizer, partition_info, reuse, trainable, collections, caching_device, validate_shape, use_resource)
    680       raise ValueError("Variable %s does not exist, or was not created with "
    681                        "tf.get_variable(). Did you mean to set reuse=None in "
--> 682                        "VarScope?" % name)
    683     if not shape.is_fully_defined() and not initializing_from_value:
    684       raise ValueError("Shape of a new variable (%s) must be fully defined, "

ValueError: Variable d_w1/Adam/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?
```

bluebirdlboro avatar Oct 30 '17 08:10 bluebirdlboro

It seems to me the reason might be the following. In

```python
Dx = discriminator(x_placeholder)
# Dx holds the discriminator's prediction probabilities
# for real MNIST images

Dg = discriminator(Gz, reuse=True)
# Dg holds discriminator prediction probabilities for generated images
```

reuse is turned on by `Dg = discriminator(Gz, reuse=True)`.

Therefore, in the discriminator code

```python
def discriminator(x_image, reuse=False):
    if (reuse):
        tf.get_variable_scope().reuse_variables()

    # First convolutional and pool layers
    # These search for 32 different 5 x 5 pixel features
    d_w1 = tf.get_variable('d_w1', [5, 5, 1, 32], initializer=tf.truncated_normal_initializer(stddev=0.02))
    d_b1 = tf.get_variable('d_b1', [32], initializer=tf.constant_initializer(0))
    d1 = tf.nn.conv2d(input=x_image, filter=d_w1, strides=[1, 1, 1, 1], padding='SAME')
    d1 = d1 + d_b1
    d1 = tf.nn.relu(d1)
    d1 = tf.nn.avg_pool(d1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
```

the call to `tf.get_variable_scope().reuse_variables()` flips reuse to True on the surrounding scope, and it stays True afterwards. So when the Adam optimizer later tries to create its slot variable `d_w1/Adam` with `tf.get_variable()`, the scope demands reuse but the variable does not exist yet, which gives you exactly this error.
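
If that is the cause, one possible fix is to give the discriminator its own named variable scope, so that reuse=True is confined to that scope and the optimizer can still create new variables at the root scope. A sketch (the scope name 'discriminator' is my own, not from the notebook):

```python
def discriminator(x_image, reuse=False):
    # Hypothetical fix: confine variable reuse to a named scope so the
    # root scope's reuse flag is never flipped to True. AdamOptimizer can
    # then still create its d_w1/Adam slot variables outside this scope.
    with tf.variable_scope('discriminator', reuse=reuse):
        d_w1 = tf.get_variable('d_w1', [5, 5, 1, 32],
                               initializer=tf.truncated_normal_initializer(stddev=0.02))
        d_b1 = tf.get_variable('d_b1', [32], initializer=tf.constant_initializer(0))
        d1 = tf.nn.conv2d(input=x_image, filter=d_w1, strides=[1, 1, 1, 1], padding='SAME')
        d1 = tf.nn.relu(d1 + d_b1)
        d1 = tf.nn.avg_pool(d1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
        # ... remaining layers of the original discriminator go here ...
        return d1
```

Note that this prefixes the variable names with `discriminator/`, so any name-based filtering used to build `d_vars` may need a check.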

I am not very sure about this, though... do you think that might be it?

bluebirdlboro avatar Oct 30 '17 08:10 bluebirdlboro

Thanks for pointing this out; I think this has to do with a TensorFlow version update that came along after I published this. I'll try to find time to update the code, but in the meantime you might have better luck with this similar project that's more up-to-date and should be compatible with TF 1+: https://github.com/jonbruner/generative-adversarial-networks
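
In the meantime, if you're on TF 1.4 or later, one option worth trying (a sketch, not tested against this notebook; the scope name is made up) is `tf.AUTO_REUSE`, which sidesteps the manual reuse bookkeeping entirely:

```python
def discriminator(x_image):
    # tf.AUTO_REUSE (TF >= 1.4): variables are created on the first call
    # and reused on every subsequent call, so no reuse flag is needed.
    with tf.variable_scope('discriminator', reuse=tf.AUTO_REUSE):
        d_w1 = tf.get_variable('d_w1', [5, 5, 1, 32],
                               initializer=tf.truncated_normal_initializer(stddev=0.02))
        # ... rest of the layers as in the original notebook ...
```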

jonbruner avatar Oct 30 '17 22:10 jonbruner