show-attend-and-tell
ValueError in train.py
work@lab-server03:~/ljz/show-attend-and-tell-master$ python train.py
image_idxs <type 'numpy.ndarray'> (399998,) int32
file_names <type 'numpy.ndarray'> (82783,) <U55
word_to_idx <type 'dict'> 23110
features <type 'numpy.ndarray'> (82783, 196, 512) float32
captions <type 'numpy.ndarray'> (399998, 17) int32
Elapse time: 198.26
image_idxs <type 'numpy.ndarray'> (19589,) int32
file_names <type 'numpy.ndarray'> (4052,) <U51
features <type 'numpy.ndarray'> (4052, 196, 512) float32
captions <type 'numpy.ndarray'> (19589, 17) int32
Elapse time: 3.67
Traceback (most recent call last):
File "train.py", line 25, in
Did you change the line "self.optimizer = tf.train.AdamOptimizer" in __init__?
@arieling No, I didn't. Since my TensorFlow version is 1.1.0, I only replaced some basic functions. The error is now as follows:
work@lab-server03:~/ljz/show-attend-and-tell-master$ python train.py
image_idxs <type 'numpy.ndarray'> (399998,) int32
file_names <type 'numpy.ndarray'> (82783,) <U55
word_to_idx <type 'dict'> 23110
features <type 'numpy.ndarray'> (82783, 196, 512) float32
captions <type 'numpy.ndarray'> (399998, 17) int32
Elapse time: 13.43
image_idxs <type 'numpy.ndarray'> (19589,) int32
file_names <type 'numpy.ndarray'> (4052,) <U51
features <type 'numpy.ndarray'> (4052, 196, 512) float32
captions <type 'numpy.ndarray'> (19589, 17) int32
Elapse time: 0.65
Traceback (most recent call last):
File "train.py", line 25, in
@JaneLou You should use TensorFlow 0.11 to run the code in this repo.
In the file core/solver.py, try changing

    tf.get_variable_scope().reuse_variables()
    _, _, generated_captions = self.model.build_sampler(max_len=20)
    with tf.name_scope('optimizer'):
        optimizer = self.optimizer(learning_rate=self.learning_rate)
        grads = tf.gradients(loss, tf.trainable_variables())
        grads_and_vars = list(zip(grads, tf.trainable_variables()))
        train_op = optimizer.apply_gradients(grads_and_vars=grads_and_vars)
to
    with tf.variable_scope(tf.get_variable_scope()) as scope:
        with tf.name_scope('optimizer'):
            tf.get_variable_scope().reuse_variables()
            _, _, generated_captions = self.model.build_sampler(max_len=20)
            optimizer = self.optimizer(learning_rate=self.learning_rate)
            grads = tf.gradients(loss, tf.trainable_variables())
            grads_and_vars = list(zip(grads, tf.trainable_variables()))
            train_op = optimizer.apply_gradients(grads_and_vars=grads_and_vars)
This can help fix the error "ValueError: Variable conv_featuresbatch_norm/beta/Adam/ does not exist". I am testing it with TensorFlow 1.0.0.
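For anyone curious why the error happens at all: in TF1, calling reuse_variables() on the current scope switches get_variable() into "look up only" mode, but AdamOptimizer later needs to *create* new slot variables (the .../Adam suffixes in the error). The sketch below is a toy, plain-Python model of TF1's get_variable semantics (not TensorFlow itself; the class and names are made up for illustration) showing why creating a variable under an active reuse flag raises exactly this kind of ValueError:

```python
class VariableScope:
    """Toy stand-in for a TF1 variable scope (illustration only)."""

    def __init__(self):
        self.vars = {}
        self.reuse = False

    def get_variable(self, name):
        if name in self.vars:
            if not self.reuse:
                raise ValueError("Variable %s already exists" % name)
            return self.vars[name]          # reuse mode: look up existing
        if self.reuse:
            raise ValueError("Variable %s does not exist" % name)
        self.vars[name] = object()          # create mode: make a new one
        return self.vars[name]


scope = VariableScope()
scope.get_variable("beta")                  # model creates its variables

scope.reuse = True                          # like reuse_variables()
scope.get_variable("beta")                  # sampler reuses them: OK

try:
    # Adam now tries to CREATE a slot variable while reuse is still on:
    scope.get_variable("beta/Adam")
except ValueError as e:
    print(e)                                # Variable beta/Adam does not exist

scope.reuse = False                         # reuse no longer in effect
scope.get_variable("beta/Adam")             # slot creation succeeds
```

The suggested edit works along these lines: reopening the scope with tf.variable_scope(tf.get_variable_scope()) keeps the reuse flag from leaking into the part of the graph where the optimizer has to create its slot variables.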
@jiecaoyu Thanks a lot! I finally referred to the file core/new_solve.py from https://github.com/chychen/caption_generation_with_visual_attention, and it works for me!