Question about Luong Attention Implementation

Open kyquang97 opened this issue 5 years ago • 7 comments

Hi @spro, I've read your implementation of Luong attention in the PyTorch seq2seq translation tutorial. In the context calculation step you use rnn_output as the input when calculating attn_weights, but I think we should use the hidden state at the current decoder timestep instead. Please check it, and could you provide an explanation if I'm wrong? (screenshot of the decoder code attached)
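For reference, here is a minimal, self-contained sketch of the step in question, with toy sizes and a simple dot-product score rather than the tutorial's actual code, just to show where rnn_output (versus hidden) enters the attention computation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

hidden_size, src_len, batch = 8, 5, 1                  # toy sizes, for illustration only

gru = nn.GRU(hidden_size, hidden_size)                 # decoder GRU
encoder_outputs = torch.randn(src_len, batch, hidden_size)
embedded = torch.randn(1, batch, hidden_size)          # embedding of the current target token
last_hidden = torch.zeros(1, batch, hidden_size)

# One decoding step.
rnn_output, hidden = gru(embedded, last_hidden)

# The tutorial uses rnn_output as the attention query; the question is whether
# hidden (the decoder state at the current timestep) should be used instead.
query = rnn_output[0]                                            # or: hidden[-1]
scores = (encoder_outputs * query).sum(dim=2)                    # dot-product score, (src_len, batch)
attn_weights = F.softmax(scores, dim=0)
context = (attn_weights.unsqueeze(2) * encoder_outputs).sum(0)   # context vector, (batch, hidden_size)
```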

kyquang97 avatar May 12 '19 09:05 kyquang97

@kyquang97 Luong takes the last context vector and concatenates it with the last output vector as the input to the RNN. The output from the RNN is passed to the attention layer to calculate the context vector for the current time step. The current context vector is then combined with the current RNN output to produce the output for this time step. Please note that the current context vector is also passed on to the next time step.
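A minimal sketch of the flow described above, using toy sizes, a dot-product score, and hypothetical variable names (not the tutorial's or the paper's exact code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

hidden_size, src_len, batch = 8, 5, 1

# Input feeding: the decoder RNN consumes [current embedding ; previous context].
gru = nn.GRU(hidden_size * 2, hidden_size)
combine = nn.Linear(hidden_size * 2, hidden_size)    # mixes the RNN output with the new context

encoder_outputs = torch.randn(src_len, batch, hidden_size)
hidden = torch.zeros(1, batch, hidden_size)
prev_context = torch.zeros(1, batch, hidden_size)

for step in range(3):                                # a few decoding steps
    embedded = torch.randn(1, batch, hidden_size)    # embedding of the current target token
    rnn_input = torch.cat([embedded, prev_context], dim=2)
    rnn_output, hidden = gru(rnn_input, hidden)

    # Score the encoder outputs against the current RNN output -> current context vector.
    scores = (encoder_outputs * rnn_output[0]).sum(dim=2)            # (src_len, batch)
    attn_weights = F.softmax(scores, dim=0)
    context = (attn_weights.unsqueeze(2) * encoder_outputs).sum(0)   # (batch, hidden_size)

    # Combine the current context with the current RNN output for this step's output.
    output = torch.tanh(combine(torch.cat([rnn_output[0], context], dim=1)))

    prev_context = context.unsqueeze(0)              # carried to the next time step, as described above
```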

beebrain avatar Aug 14 '19 05:08 beebrain

@beebrain Please correct me if I'm wrong, but you are using the LSTM layer instead of the LSTM cell, so each forward pass processes a different sample, not a different timestep of a single sample. You have no control over individual timesteps here; what you get out of the RNN in this configuration is a translation/sequence that has already gone through all of its timesteps.
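To illustrate the distinction being drawn here (a generic sketch, not the tutorial's code): nn.LSTM or nn.GRU runs a whole input sequence in one forward call, while nn.LSTMCell is stepped manually and gives per-timestep control:

```python
import torch
import torch.nn as nn

seq_len, batch, input_size, hidden_size = 5, 2, 4, 8   # toy sizes
x = torch.randn(seq_len, batch, input_size)

# nn.LSTM: one forward call runs all timesteps internally.
lstm = nn.LSTM(input_size, hidden_size)
outputs, (h_n, c_n) = lstm(x)                          # outputs: (seq_len, batch, hidden_size)

# nn.LSTMCell: you step through the sequence yourself, one call per timestep,
# so you can inspect or modify the state between steps.
cell = nn.LSTMCell(input_size, hidden_size)
h = torch.zeros(batch, hidden_size)
c = torch.zeros(batch, hidden_size)
step_outputs = []
for t in range(seq_len):
    h, c = cell(x[t], (h, c))
    step_outputs.append(h)
```

That said, an nn.GRU or nn.LSTM can also be called on a length-1 input inside a loop, which is, as far as I can tell, how the tutorial's decoder is driven during decoding.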

Coderx7 avatar Oct 25 '19 03:10 Coderx7

I think he just meant that in this implementation, the rnn_output is fed into the attention layer instead of the current decoder hidden state, which is inconsistent with the description in the original paper.

syorami avatar Dec 01 '19 08:12 syorami

I think the rnn_output and the hidden output of self.gru have the same value, so you can use either hidden or rnn_output.
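This is easy to verify with a quick sanity check (toy shapes, assuming a single-layer GRU called one step at a time):

```python
import torch
import torch.nn as nn

hidden_size, batch = 8, 1
gru = nn.GRU(hidden_size, hidden_size, num_layers=1)

embedded = torch.randn(1, batch, hidden_size)          # a single decoding step (seq_len = 1)
last_hidden = torch.zeros(1, batch, hidden_size)

rnn_output, hidden = gru(embedded, last_hidden)
# rnn_output: (seq_len, batch, H) = (1, 1, 8); hidden: (num_layers, batch, H) = (1, 1, 8)
print(torch.allclose(rnn_output, hidden))              # True for this single-layer, single-step case
# With more layers it is the top layer's slice that matches: rnn_output[-1] vs. hidden[-1].
```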

beebrain avatar Dec 01 '19 09:12 beebrain

You reminded me! I was also confused at first by the usage of outputs versus hidden states in some attention implementations, but they do indeed share the same values. BTW, what about the LSTM? From the PyTorch docs, the LSTM returns cell states as well as hidden states. Are the cell states used in attention, or can I treat the outputs and the last hidden states as interchangeable?

syorami avatar Dec 04 '19 11:12 syorami

In my opinion, you can use the hidden state output, just like with the GRU.
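A minimal check along the same lines for the LSTM case (toy shapes; the cell state is returned but would not serve as the attention query):

```python
import torch
import torch.nn as nn

hidden_size, batch = 8, 1
lstm = nn.LSTM(hidden_size, hidden_size, num_layers=1)

embedded = torch.randn(1, batch, hidden_size)          # a single decoding step
state = (torch.zeros(1, batch, hidden_size), torch.zeros(1, batch, hidden_size))

rnn_output, (h_n, c_n) = lstm(embedded, state)
print(torch.allclose(rnn_output, h_n))                 # True: the output matches the hidden state
# c_n (the cell state) is carried to the next step as internal memory, but it is
# not what you would feed to the attention layer in this kind of setup.
```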

beebrain avatar Dec 04 '19 15:12 beebrain

I am also confused about why we can calculate all the attention scores for the source sentence using the previous hidden state and current input embedding.

richardsun-voyager avatar Jan 13 '20 03:01 richardsun-voyager