Caused by op 'net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/Assert/Assert', defined at:
File "eval_gan.py", line 86, in
eval()
File "eval_gan.py", line 28, in eval
g = Graph(is_training=False)
File "/home/t-haxi/S2SCycleGAN/Tacotron_GAN/train.py", line 50, in init
is_training=is_training) # (N, T', hp.n_mels*hp.r)
File "/home/t-haxi/S2SCycleGAN/Tacotron_GAN/networks.py", line 156, in decode1
dec = attention_decoder(dec, memory, num_units=hp.embed_size) # (N, T', E)
File "/home/t-haxi/S2SCycleGAN/Tacotron_GAN/modules.py", line 255, in attention_decoder
dtype=tf.float32) #( N, T', 16)
File "/home/t-haxi/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/rnn.py", line 601, in dynamic_rnn
state = cell.zero_state(batch_size, dtype)
File "/home/t-haxi/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py", line 1319, in zero_state
self._batch_size_checks(batch_size, error_message)):
File "/home/t-haxi/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py", line 1238, in _batch_size_checks
for attention_mechanism in self._attention_mechanisms]
File "/home/t-haxi/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py", line 1238, in
for attention_mechanism in self._attention_mechanisms]
File "/home/t-haxi/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/check_ops.py", line 405, in assert_equal
return control_flow_ops.Assert(condition, data, summarize=summarize)
File "/home/t-haxi/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py", line 118, in wrapped
return _add_should_use_warning(fn(*args, **kwargs))
File "/home/t-haxi/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 172, in Assert
return gen_logging_ops._assert(condition, data, summarize, name="Assert")
File "/home/t-haxi/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/gen_logging_ops.py", line 51, in _assert
name=name)
File "/home/t-haxi/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/t-haxi/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
op_def=op_def)
File "/home/t-haxi/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1718, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): assertion failed: [When calling zero_state of AttentionWrapper attention_wrapper: Non-matching batch sizes between the memory (encoder output) and the requested batch size. Are you using the BeamSearchDecoder? If so, make sure your encoder output has been tiled to beam_width via tf.contrib.seq2seq.tile_batch, and the batch_size= argument passed to zero_state is batch_size * beam_width.] [Condition x == y did not hold element-wise:] [x (net/decoder1/attention_decoder/rnn/strided_slice:0) = ] [32] [y (net/decoder1/attention_decoder/LuongAttention/strided_slice_1:0) = ] [9]
I got this error when running eval_gan.py. Does anyone know how to fix it?
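
For reference, the assertion message says that when a BeamSearchDecoder is used, the encoder output should be tiled with tf.contrib.seq2seq.tile_batch and zero_state should be requested with batch_size * beam_width. A minimal sketch of that pattern (the names and sizes below are placeholders, not my actual Tacotron code) would be:

import tensorflow as tf

beam_width = 4
batch_size = 32
embed_size = 256

# Stand-ins for the real encoder output, its sequence lengths, and the decoder cell.
encoder_outputs = tf.placeholder(tf.float32, [None, None, embed_size])
encoder_lengths = tf.placeholder(tf.int32, [None])
decoder_cell = tf.nn.rnn_cell.GRUCell(embed_size)

# Tile the memory so its batch dimension becomes batch_size * beam_width.
tiled_memory = tf.contrib.seq2seq.tile_batch(encoder_outputs, multiplier=beam_width)
tiled_lengths = tf.contrib.seq2seq.tile_batch(encoder_lengths, multiplier=beam_width)

attention = tf.contrib.seq2seq.LuongAttention(
    num_units=embed_size,
    memory=tiled_memory,
    memory_sequence_length=tiled_lengths)

cell = tf.contrib.seq2seq.AttentionWrapper(decoder_cell, attention)

# zero_state must then be requested for the tiled batch size.
init_state = cell.zero_state(batch_size * beam_width, tf.float32)

In my case, though, the decoder is built with tf.nn.dynamic_rnn rather than a BeamSearchDecoder (see the traceback), so I am not sure this hint applies; the mismatch I see is 32 vs 9.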