video_to_sequence
Error running model.train()
There is an error, shown below. Any ideas why the datatype does not match?
In [2]: model.train()
preprocessing word counts and creating vocab based on word count threshold 10
filtered words from 12802 to 2577

ValueError                                Traceback (most recent call last)
/home/binwang/Documents/video_to_sequence-master/model.py in train()
    237             bias_init_vector=bias_init_vector)
    238
--> 239     tf_loss, tf_video, tf_video_mask, tf_caption, tf_caption_mask, tf_probs = model.build_model()
    240     sess = tf.InteractiveSession()
    241

/home/binwang/Documents/video_to_sequence-master/model.py in build_model(self)
     46         image_emb = tf.reshape(image_emb, [self.batch_size, self.n_lstm_steps, self.dim_hidden])
     47
---> 48         state1 = tf.zeros([self.batch_size, self.lstm1.state_size])
     49         state2 = tf.zeros([self.batch_size, self.lstm2.state_size])
     50         padding = tf.zeros([self.batch_size, self.dim_hidden])

/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.pyc in zeros(shape, dtype, name)
   1437       output = constant(zero, shape=shape, dtype=dtype, name=name)
   1438     except (TypeError, ValueError):
-> 1439       shape = ops.convert_to_tensor(shape, dtype=dtypes.int32, name="shape")
   1440       output = fill(shape, constant(zero, dtype=dtype), name=name)
   1441   assert output.dtype.base_dtype == dtype

/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.pyc in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype)
    667
    668     if ret is None:
--> 669       ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
    670
    671     if ret is NotImplemented:

/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.pyc in _constant_tensor_conversion_function(v, dtype, name, as_ref)
    174                                          as_ref=False):
    175   _ = as_ref
--> 176   return constant(v, dtype=dtype, name=name)
    177
    178

/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.pyc in constant(value, dtype, shape, name, verify_shape)
    163   tensor_value = attr_value_pb2.AttrValue()
    164   tensor_value.tensor.CopyFrom(
--> 165       tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
    166   dtype_value = attr_value_pb2.AttrValue(type=tensor_value.tensor.dtype)
    167   const_tensor = g.create_op(

/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.pyc in make_tensor_proto(values, dtype, shape, verify_shape)
    366     else:
    367       _AssertCompatible(values, dtype)
--> 368       nparray = np.array(values, dtype=np_dt)
    369       # check to them.
    370       # We need to pass in quantized values as tuples, so don't apply the shape

ValueError: setting an array element with a sequence.
Hello, I encountered the same error as yours. Have you solved it yet? Could you tell me how to solve it? Thanks!
@Chilicy I have solved it, but I cannot remember how. I will check it tomorrow and get back to you.
@BinWang28 I solved it by replacing that line with state1 = self.lstm1.zero_state(1, tf.float32). Thank you all the same. But when I use tf.train.AdamOptimizer, another error comes up: ValueError: Variable Wemb/Adam/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope? Do you know how to fix it?
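That Adam error usually means the graph is still in variable-reuse mode when the optimizer tries to create its slot variables (e.g. Wemb/Adam) with tf.get_variable: if build_model flips the root scope with tf.get_variable_scope().reuse_variables() inside its time-step loop, everything created afterwards, including the optimizer, inherits reuse=True. Below is a minimal sketch of the usual workaround, which confines the reuse flag to a named scope so the train op is built outside of it; the names, shapes, and loss here are illustrative, not the repo's, and it assumes a TF version where tf.nn.rnn_cell.BasicLSTMCell is available (the module path moved between releases).

```python
import tensorflow as tf

batch_size, dim_hidden, n_steps = 8, 16, 5
inputs = tf.placeholder(tf.float32, [batch_size, n_steps, dim_hidden])

lstm = tf.nn.rnn_cell.BasicLSTMCell(dim_hidden)
state = lstm.zero_state(batch_size, tf.float32)

outputs = []
with tf.variable_scope("LSTM1"):
    for i in range(n_steps):
        if i > 0:
            # Reuse the LSTM weights created at step 0, but only inside this
            # named scope -- the root scope is left untouched.
            tf.get_variable_scope().reuse_variables()
        output, state = lstm(inputs[:, i, :], state)
        outputs.append(output)

loss = tf.reduce_mean(tf.square(outputs[-1]))

# The root scope never entered reuse mode, so AdamOptimizer can create its
# slot variables (the ".../Adam" variables named in the error) with
# tf.get_variable without tripping the "Variable ... does not exist" check.
train_op = tf.train.AdamOptimizer(0.001).minimize(loss)
```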
I also encountered the same error. Have you solved it? @Chilicy @BinWang28
I solved the error by replacing
state1 = tf.zeros([self.batch_size, self.lstm1.state_size])
state2 = tf.zeros([self.batch_size, self.lstm2.state_size])
on lines 47 and 48 with
state1 = self.lstm1.zero_state(self.batch_size, tf.float32)
state2 = self.lstm2.zero_state(self.batch_size, tf.float32)
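For context on why the original lines fail: in newer TensorFlow releases the cell's state_size is an LSTMStateTuple (one size for the cell state c and one for the hidden state h) rather than a single integer, so tf.zeros([batch_size, state_size]) receives a nested sequence as its shape and raises "setting an array element with a sequence." zero_state builds the initial state in whatever format the cell expects, which is why the replacement above works. A small sketch, assuming a TF version where tf.nn.rnn_cell.BasicLSTMCell is available (the exact module path moved between releases):

```python
import tensorflow as tf

batch_size, dim_hidden = 32, 256
lstm1 = tf.nn.rnn_cell.BasicLSTMCell(dim_hidden)

# On newer versions this prints LSTMStateTuple(c=256, h=256) -- a tuple, not an
# int -- which is why tf.zeros([batch_size, lstm1.state_size]) cannot build a shape.
print(lstm1.state_size)

# zero_state returns a correctly shaped zero initial state (a (c, h) pair here),
# regardless of whether the cell uses a tuple or a concatenated state.
state1 = lstm1.zero_state(batch_size, tf.float32)
```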
You don't need to change anything unless you are willing to change your TensorFlow version. r0.10 works fine for the CPU build; the code was written against a TensorFlow version around 0.10.
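If you want to confirm which behaviour your install has before deciding, a quick check (not part of the repo) is enough:

```python
import tensorflow as tf

# The code targets roughly r0.10; on newer versions, use the zero_state fix above.
print(tf.__version__)
```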