Attention-OCR
`tf.contrib.rnn.core_rnn_cell.BasicLSTMCell` should be replaced by `tf.contrib.rnn.BasicLSTMCell`
For TensorFlow 1.2 and Keras 2.0, the line `tf.contrib.rnn.core_rnn_cell.BasicLSTMCell` should be replaced by `tf.contrib.rnn.BasicLSTMCell`.
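The rename can also be handled without hard-coding either module path. Below is a minimal sketch of an attribute-fallback pattern; the `new_layout`/`old_layout` namespaces are stand-ins I made up so the example runs without TensorFlow installed, but the intended call on TF 1.x would be `resolve_lstm_cell(tf.contrib.rnn)`:

```python
from types import SimpleNamespace

def resolve_lstm_cell(rnn_module):
    """Pick BasicLSTMCell from whichever layout this TF version uses."""
    # TF >= 1.2 exposes the cell directly on tf.contrib.rnn;
    # TF 1.0/1.1 nested it under tf.contrib.rnn.core_rnn_cell.
    if hasattr(rnn_module, "BasicLSTMCell"):
        return rnn_module.BasicLSTMCell
    return rnn_module.core_rnn_cell.BasicLSTMCell

# Stand-ins for the two layouts (hypothetical objects, for illustration only):
new_layout = SimpleNamespace(BasicLSTMCell=lambda n, **kw: ("cell", n))
old_layout = SimpleNamespace(
    core_rnn_cell=SimpleNamespace(BasicLSTMCell=lambda n, **kw: ("cell", n)))

cell = resolve_lstm_cell(new_layout)(256, forget_bias=0.0, state_is_tuple=False)
```

With a pattern like this, the same `seq2seq_model.py` line would work on both the old and new `tf.contrib.rnn` layouts.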
```
$ ./train_demo.sh
2017-06-30 16:09:13,025 root INFO ues GRU in the decoder.
input_tensor dim: (?, 1, 32, ?)
CNN outdim before squeeze: (?, 1, ?, 512)
CNN outdim: (?, ?, 512)
Traceback (most recent call last):
  File "src/launcher.py", line 146, in <module>
    main(sys.argv[1:], exp_config.ExpConfig)
  File "src/launcher.py", line 142, in main
    session = sess)
  File "/home/math/Github/Attention-OCR/src/model/model.py", line 151, in __init__
    use_gru = use_gru)
  File "/home/math/Github/Attention-OCR/src/model/seq2seq_model.py", line 87, in __init__
    single_cell = tf.contrib.rnn.core_rnn_cell.BasicLSTMCell(attn_num_hidden, forget_bias=0.0, state_is_tuple=False)
AttributeError: 'module' object has no attribute 'core_rnn_cell'
```
and
```
$ sh test_demo.sh
2017-06-30 16:10:13.765890: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 16:10:13.765918: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 16:10:13.765927: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 16:10:13.765933: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 16:10:13.765938: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 16:10:13,766 root INFO loading data
2017-06-30 16:10:13,767 root INFO phase: test
2017-06-30 16:10:13,767 root INFO model_dir: model_01_16
2017-06-30 16:10:13,767 root INFO load_model: True
2017-06-30 16:10:13,767 root INFO output_dir: model_01_16/synth90
2017-06-30 16:10:13,767 root INFO steps_per_checkpoint: 500
2017-06-30 16:10:13,767 root INFO batch_size: 1
2017-06-30 16:10:13,767 root INFO num_epoch: 3
2017-06-30 16:10:13,767 root INFO learning_rate: 1
2017-06-30 16:10:13,768 root INFO reg_val: 0
2017-06-30 16:10:13,768 root INFO max_gradient_norm: 5.000000
2017-06-30 16:10:13,768 root INFO clip_gradients: True
2017-06-30 16:10:13,768 root INFO valid_target_length inf
2017-06-30 16:10:13,768 root INFO target_vocab_size: 39
2017-06-30 16:10:13,768 root INFO target_embedding_size: 10.000000
2017-06-30 16:10:13,768 root INFO attn_num_hidden: 256
2017-06-30 16:10:13,768 root INFO attn_num_layers: 2
2017-06-30 16:10:13,768 root INFO visualize: True
2017-06-30 16:10:13,768 root INFO buckets
2017-06-30 16:10:13,768 root INFO [(16, 32), (27, 32), (35, 32), (64, 32), (80, 32)]
2017-06-30 16:10:13,768 root INFO ues GRU in the decoder.
input_tensor dim: (?, 1, 32, ?)
CNN outdim before squeeze: (?, 1, ?, 512)
CNN outdim: (?, ?, 512)
Traceback (most recent call last):
  File "src/launcher.py", line 146, in <module>
    main(sys.argv[1:], exp_config.ExpConfig)
  File "src/launcher.py", line 142, in main
    session = sess)
  File "/home/math/Github/Attention-OCR/src/model/model.py", line 151, in __init__
    use_gru = use_gru)
  File "/home/math/Github/Attention-OCR/src/model/seq2seq_model.py", line 87, in __init__
    single_cell = tf.contrib.rnn.core_rnn_cell.BasicLSTMCell(attn_num_hidden, forget_bias=0.0, state_is_tuple=False)
AttributeError: 'module' object has no attribute 'core_rnn_cell'
```
see https://github.com/da03/Attention-OCR/pull/47
Replace the previous code with:

```python
basic_cell = tf.contrib.rnn.DropoutWrapper(
    tf.contrib.rnn.BasicLSTMCell(emb_dim, state_is_tuple=True),
    output_keep_prob=self.keep_prob)
# stack cells together: n-layered model
stacked_lstm = tf.contrib.rnn.MultiRNNCell([basic_cell] * num_layers, state_is_tuple=True)
```
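One caveat with that snippet: newer TF 1.x releases can reject `[basic_cell] * num_layers`, because `MultiRNNCell` then receives the same cell object for every layer and complains about variable-scope reuse. Building a fresh cell per layer avoids this. The sketch below only shows the shape of the fix; `make_cell` is a stand-in I invented (a real version would return `tf.contrib.rnn.BasicLSTMCell(...)`):

```python
num_layers = 2

def make_cell(num_hidden):
    # Stand-in for tf.contrib.rnn.BasicLSTMCell(num_hidden, state_is_tuple=True);
    # a plain dict keeps the example runnable without TensorFlow.
    return {"num_hidden": num_hidden}

# One independent cell object per layer, instead of [cell] * num_layers:
cells = [make_cell(512) for _ in range(num_layers)]
```

The list comprehension guarantees each layer gets its own instance, which is what `MultiRNNCell` expects on recent TF 1.x versions.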
Try replacing line 87 with:

```python
single_cell = tf.contrib.rnn.rnn_cell.BasicLSTMCell(attn_num_hidden, forget_bias=0.0, state_is_tuple=False)
```

Replacing `core_rnn_cell` with `rnn_cell` solves the issue for TensorFlow 0.12.1 and Python 3.
In my case I replaced `tf.contrib.rnn.core_rnn_cell.BasicLSTMCell` with `tf.contrib.rnn.BasicLSTMCell`, and replaced every `rnn.core_rnn_cell` with just `rnn`, and it worked.
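That repo-wide rename can be scripted rather than done by hand. A sketch with GNU `sed` (the `-i` flag here is GNU-style; macOS `sed` needs `-i ''`), demonstrated on a throwaway directory so nothing real is touched; point `SRC_DIR` at the project's `src/` after backing it up:

```shell
# Demo on a temp dir; substitute the real source tree for "$SRC_DIR".
SRC_DIR=$(mktemp -d)
printf '%s\n' 'cell = tf.contrib.rnn.core_rnn_cell.BasicLSTMCell(256)' \
    > "$SRC_DIR/seq2seq_model.py"

# Rewrite every rnn.core_rnn_cell reference to just rnn, in place.
grep -rl 'rnn\.core_rnn_cell' "$SRC_DIR" \
    | xargs sed -i 's/rnn\.core_rnn_cell/rnn/g'

cat "$SRC_DIR/seq2seq_model.py"
```

After the substitution, the file contains `cell = tf.contrib.rnn.BasicLSTMCell(256)`, matching the manual fix described above.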