
Error in BahdanauAttnDecoderRNN

Open hityzy1122 opened this issue 6 years ago • 1 comment

In class BahdanauAttnDecoderRNN(nn.Module), the GRU is constructed as self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=dropout_p), but the input fed to the GRU is rnn_input = torch.cat((word_embedded, context), 2), whose feature size is 2*hidden_size — so the first argument (input_size) doesn't match the actual input.
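A minimal sketch of the mismatch and the fix (the tensor names and sizes here are illustrative, not taken from the repo): because rnn_input concatenates the embedding and the context along the feature dimension, the GRU's input_size must be 2*hidden_size, not hidden_size.

```python
import torch
import torch.nn as nn

hidden_size, n_layers, dropout_p = 8, 2, 0.1
batch, seq_len = 1, 1

# Decoder step inputs (shapes as in a seq2seq decoder: (seq, batch, feature))
word_embedded = torch.randn(seq_len, batch, hidden_size)
context = torch.randn(seq_len, batch, hidden_size)

# Concatenating along dim 2 doubles the feature size to 2*hidden_size
rnn_input = torch.cat((word_embedded, context), 2)  # (1, 1, 16)

# Fix: input_size (first positional arg) must match rnn_input's last dim
gru = nn.GRU(2 * hidden_size, hidden_size, n_layers, dropout=dropout_p)
hidden = torch.zeros(n_layers, batch, hidden_size)

output, hidden = gru(rnn_input, hidden)
print(output.shape)  # torch.Size([1, 1, 8])
```

With the original nn.GRU(hidden_size, hidden_size, ...), the same forward call raises a size-mismatch RuntimeError, since the GRU expects 8 input features but receives 16.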

hityzy1122 — Jul 19 '19 12:07

nn.GRU has the parameters below, doesn't it?

- input_size – The number of expected features in the input x
- hidden_size – The number of features in the hidden state h
- num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two GRUs together to form a stacked GRU, with the second GRU taking in outputs of the first GRU and computing the final results. Default: 1
- bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
- batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False
- dropout – If non-zero, introduces a Dropout layer on the outputs of each GRU layer except the last layer, with dropout probability equal to dropout. Default: 0
- bidirectional – If True, becomes a bidirectional GRU. Default: False
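As the parameter list shows, input_size and hidden_size are independent arguments, so a GRU can consume inputs wider than its hidden state. A small sketch (sizes chosen arbitrarily for illustration):

```python
import torch
import torch.nn as nn

# input_size (16) need not equal hidden_size (8): the GRU projects
# 16-dimensional inputs into an 8-dimensional hidden state.
gru = nn.GRU(input_size=16, hidden_size=8)

x = torch.randn(5, 3, 16)  # (seq_len, batch, input_size)
out, h = gru(x)
print(out.shape)  # torch.Size([5, 3, 8])
```

This is why the fix for the issue above is simply to pass 2*hidden_size as the first argument.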

Michi-123 — Aug 14 '19 04:08