pixel-cnn

"Object was never used" errors after upgrading to Tensorflow 1.2
After upgrading to TensorFlow 1.2, running train.py produces many (567) instances of the following error:
ERROR:tensorflow:==================================
Object was never used (type <class 'tensorflow.python.framework.ops.Tensor'>):
<tf.Tensor 'model_1/conv2d_0/stack:0' shape=(3,) dtype=int32>
If you want to mark it as used call its "mark_used()" method.
It was originally created here:
['File "train.py", line 135, in <module>\n    dropout_p=args.dropout_p, **model_opt)', 'File "/home/malcolm/data/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/template.py", line 261, in __call__\n    return self._call_func(args, kwargs, check_for_new_variables=True)', 'File "/home/malcolm/data/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/template.py", line 217, in _call_func\n    result = self._func(*args, **kwargs)', 'File "/media/bruce/MoreData/malcolm/tpc/base-pixel-cnn/pixel_cnn_pp/model.py", line 40, in model_spec\n    x_pad, num_filters=nr_filters, filter_size=[2, 3]))]  # stream for pixels above', 'File "/home/malcolm/data/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 181, in func_with_args\n    return func(*args, **current_args)', 'File "/media/bruce/MoreData/malcolm/tpc/base-pixel-cnn/pixel_cnn_pp/nn.py", line 358, in down_shifted_conv2d\n    return conv2d(x, num_filters, filter_size=filter_size, pad=\'VALID\', stride=stride, **kwargs)', 'File "/home/malcolm/data/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 181, in func_with_args\n    return func(*args, **current_args)', 'File "/media/bruce/MoreData/malcolm/tpc/base-pixel-cnn/pixel_cnn_pp/nn.py", line 238, in conv2d\n    tf.assert_variables_initialized([V, g, b])', 'File "/home/malcolm/data/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py", line 170, in wrapped\n    return _add_should_use_warning(fn(*args, **kwargs))', 'File "/home/malcolm/data/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py", line 139, in _add_should_use_warning\n    wrapped = TFShouldUseWarningWrapper(x)', 'File "/home/malcolm/data/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py", line 96, in __init__\n    stack = [s.strip() for s in traceback.format_stack()]']
==================================
It still seems to run OK after printing the errors, though. The culprit seems to be this line, which appears three times in pixel_cnn_pp/nn.py:
tf.assert_variables_initialized([V, g, b])
This function apparently expects its return value to be consumed, and TF 1.2 introduced a check that prints an error when it isn't. As a workaround, commenting out this line (in each place it appears) removes the errors.
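For anyone who would rather keep the check than delete it, there are two less invasive options: the error message's own suggestion (mark_used()), or consuming the op through tf.control_dependencies, which a comment further down this thread confirms for the analogous tf.assert_equal case. A minimal sketch (the variable shapes below are made up for illustration, not taken from the repo):

import tensorflow as tf

V = tf.get_variable('V', shape=[2, 3, 16, 16])
g = tf.get_variable('g', shape=[16])
b = tf.get_variable('b', shape=[16])
x = tf.placeholder(tf.float32, shape=[None, 32, 32, 16])

# Option 1: silence the warning the way the error message suggests
tf.assert_variables_initialized([V, g, b]).mark_used()

# Option 2: consume the assert op via a control dependency, so the
# downstream op only runs after the check
with tf.control_dependencies([tf.assert_variables_initialized([V, g, b])]):
    x = tf.identity(x)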
I am experiencing this too.
me too.
me three, here is a bit more context on how I came across this error:
import tensorflow as tf

def model_inputs():  # wrapper name assumed; the original post showed only the function body
    input_ = tf.placeholder(tf.int32, [None, None], name='input')
    targets = tf.placeholder(tf.int32, [None, None], name='target')
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    len_source = tf.placeholder(tf.int32, [None], name='source_sequence_length')
    len_target = tf.placeholder(tf.int32, [None], name='target_sequence_length')
    max_target = tf.reduce_max(len_target, name='max_target_len')

    return (input_, targets, learning_rate, keep_prob, len_target, max_target, len_source)
I ran the above in a Jupyter cell and got the error.
me four. Any official solutions?
me five, any solution or workaround?
I've got this error too. I think you should check whether the function that triggers the issue has been deprecated. In my case, the function 'initialize_all_variables' was deprecated in version 1.3.
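For that particular deprecation, the drop-in replacement is tf.global_variables_initializer(); a minimal sketch:

import tensorflow as tf

# tf.initialize_all_variables() is deprecated; this is its replacement
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init_op)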
I was able to solve the problem for my particular use case by passing a "shape" for scalar placeholders. So something like this:
learning_rate = tf.placeholder(tf.float32, name='learning_rate', shape=())
keep_prob = tf.placeholder(tf.float32, name='keep_prob', shape=())
instead of
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
I believe what was happening (in my case) was that I had unit tests making assertions against the shape of these variables but not otherwise using them, hence the error. An extra validation check for this must have been added in TensorFlow 1.2.
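For anyone unsure what the shape argument changes here: with shape=() the placeholder is a rank-0 (scalar) tensor rather than a tensor of unknown rank, so shape checks can be resolved at graph-construction time. A minimal sketch (names are illustrative):

import tensorflow as tf

lr_unknown = tf.placeholder(tf.float32, name='lr_unknown')          # unknown rank
lr_scalar = tf.placeholder(tf.float32, shape=(), name='lr_scalar')  # rank-0 scalar

print(lr_unknown.shape)  # <unknown>
print(lr_scalar.shape)   # ()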
I have the same issue. My placeholders are defined like this:
global_step = tf.get_variable('global_step', [],
                              initializer=tf.constant_initializer(0), dtype=tf.int32)
# global_step = tf.Variable(0, name='global_step')
# global_step = tf.get_variable('global_step', shape=[], initializer=tf.zeros_initializer(), dtype=tf.int32, trainable=False)
x = tf.placeholder(tf.float32, shape=[None, num_input], name="mfcc_input")
y = tf.placeholder(tf.int32, shape=[None, num_classes], name="labels")
is_training = tf.placeholder(dtype=bool, shape=[], name="is_training")
q_selector = tf.cond(is_training,
                     lambda: [train_mfcc_batch, train_labels_batch],
                     lambda: [test_mfcc_batch, test_labels_batch])
and it shows this error when I try to run on a different machine with GPUs (distributed TensorFlow):
2017-08-25 12:02:50.917393: W tensorflow/core/framework/op_kernel.cc:1148] Invalid argument: Shape [-1,13] has negative dimensions
2017-08-25 12:02:50.917476: E tensorflow/core/common_runtime/executor.cc:644] Executor failed to create kernel. Invalid argument: Shape [-1,13] has negative dimensions
    [[Node: mfcc_input = Placeholder[dtype=DT_FLOAT, shape=[?,13], _device="/job:worker/replica:0/task:0/gpu:0"]()]]
2017-08-25 12:02:50.918477: W tensorflow/core/framework/op_kernel.cc:1148] Invalid argument: Shape [-1,13] has negative dimensions
2017-08-25 12:02:50.918525: E tensorflow/core/common_runtime/executor.cc:644] Executor failed to create kernel. Invalid argument: Shape [-1,13] has negative dimensions
   [[Node: mfcc_input = Placeholder[dtype=DT_FLOAT, shape=[?,13], _device="/job:worker/replica:0/task:0/gpu:0"]()]]
2017-08-25 12:02:50.920048: W tensorflow/core/framework/op_kernel.cc:1148] Invalid argument: Shape [-1,3] has negative dimensions
2017-08-25 12:02:50.920092: E tensorflow/core/common_runtime/executor.cc:644] Executor failed to create kernel. Invalid argument: Shape [-1,3] has negative dimensions
   [[Node: labels = Placeholder[dtype=DT_INT32, shape=[?,3], _device="/job:worker/replica:0/task:0/gpu:0"]()]]
ERROR:tensorflow:==================================
Object was never used (type ):
If you want to mark it as used call its "mark_used()" method.
It was originally created here:
['File "sound_classifier_distributed_supervisor.py", line 164, in \n    train_op = rep_op.minimize(xent,global_step=global_step)', 'File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 325, in minimize\n    name=name)', 'File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/sync_replicas_optimizer.py", line 252, in apply_gradients\n    variables.global_variables())', 'File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/tf_should_use.py", line 170, in wrapped\n    return _add_should_use_warning(fn(*args, **kwargs))', 'File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/tf_should_use.py", line 139, in _add_should_use_warning\n    wrapped = TFShouldUseWarningWrapper(x)', 'File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/tf_should_use.py", line 96, in __init__\n    stack = [s.strip() for s in traceback.format_stack()]']`   
                                    
                                    
                                    
                                
I met this issue before. What I did to solve it was to set shape=[] for learning_rate and keep_prob:
learning_rate = tf.placeholder(tf.float32, shape=[], name='learning_rate')
keep_prob = tf.placeholder(tf.float32, shape=[], name='keep_prob')
me twelve
me thirteen, at tensorflow-gpu 1.11.0, for tf.assert_variables_initialized([V, g, b])
me fourteen
me fifteen. Any suggestions?
me 6teen
me 7teen
me 8teen
me 9teen
me 20teen
Me 21! I can drink now. Yay!
Me tuentitu
23 =\
+25
+26
I think the real problem is that we went from 23 to 25 (we failed the counting test). Me 24th.
27
I had a similar issue after upgrading to TF 1.12. In my case, the error was thrown at the following line:
tf.assert_equal(tf.shape(x), tf.shape(y))  # <-- error
...  # rest of code
The fix was to place the assert op inside a control-dependency block, as follows:
with tf.control_dependencies([tf.assert_equal(tf.shape(x), tf.shape(y))]):
    ...  # rest of code
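To make that pattern concrete, here is a small self-contained sketch (TF 1.x; names and shapes are made up): the assert op is passed to control_dependencies, so its return value counts as used and no warning is emitted.

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3], name='x')
y = tf.placeholder(tf.float32, shape=[None, 3], name='y')

# The assert op is consumed by control_dependencies, so the
# "Object was never used" warning is not triggered.
with tf.control_dependencies([tf.assert_equal(tf.shape(x), tf.shape(y))]):
    z = x + y

with tf.Session() as sess:
    batch = np.ones((4, 3), dtype=np.float32)
    print(sess.run(z, feed_dict={x: batch, y: batch}))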
28 -- what is the last known version of tensorflow where this error did not occur?
29
30 -- anyone got a solution?
31 :( I am using tensorflow 1.11