Possible bug in AttentionDecoderCell
https://github.com/farizrahman4u/seq2seq/blob/1f1c3304991eb91b533e91ac5f96ee3290fa9c7d/seq2seq/cells.py#L85
Instead of this:

C = Lambda(lambda x: K.repeat(x, input_length), output_shape=(input_length, input_dim))(c_tm1)

shouldn't it be this (input_dim -> hidden_dim)?

C = Lambda(lambda x: K.repeat(x, input_length), output_shape=(input_length, hidden_dim))(c_tm1)

c_tm1 is the previous cell state, so its last dimension is hidden_dim, not input_dim. K.repeat(c_tm1, input_length) therefore produces a per-sample shape of (input_length, hidden_dim), and the declared output_shape should match that.
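A quick way to verify this is to evaluate K.repeat directly. This is a minimal sketch, not code from the repo; the dimension values are made up, only the K.repeat shape behavior matters:

```python
import numpy as np
from keras import backend as K

# Hypothetical sizes for illustration (not taken from the repo).
batch_size, input_length, input_dim, hidden_dim = 2, 7, 10, 5

# c_tm1 is the previous cell state: shape (batch_size, hidden_dim).
c_tm1 = K.variable(np.zeros((batch_size, hidden_dim)))

# K.repeat tiles a 2D tensor along a new time axis, so the result
# has shape (batch_size, input_length, hidden_dim).
C = K.repeat(c_tm1, input_length)

print(K.eval(C).shape)  # (2, 7, 5): the last axis is hidden_dim, not input_dim
```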
Couldn't agree more.