pytorch-seq2seq
Attention type
Can somebody tell me what type of attention is used in this lib? I checked it against the Bahdanau and Luong attentions and it doesn't look like either, or maybe I'm missing something!
Actually, after double-checking it, it looks like it's the dot attention from Luong. Is there a reason to use the dot attention and not the general one?
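For anyone comparing, here is a minimal sketch of the two Luong score functions; the class and argument names below are illustrative, not this library's API:

```python
import torch
import torch.nn as nn

class LuongScore(nn.Module):
    """Illustrative Luong-style attention scores: 'dot' vs 'general'."""
    def __init__(self, hidden_size, method="dot"):
        super().__init__()
        self.method = method
        if method == "general":
            # 'general' inserts a learned matrix W_a between decoder and encoder states
            self.W_a = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, decoder_output, encoder_context):
        # decoder_output: (batch, out_len, hidden), encoder_context: (batch, in_len, hidden)
        if self.method == "general":
            encoder_context = self.W_a(encoder_context)
        # score(h_t, h_s) = h_t^T h_s (dot)  or  h_t^T W_a h_s (general)
        return torch.bmm(decoder_output, encoder_context.transpose(1, 2))
```

The only difference is the single learned matrix in the 'general' form; 'dot' keeps the score parameter-free.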
@ratis86 thanks for pointing this out. There's no particular reason that I'm aware of; you can contact the respective contributor for that. However, we're gonna be implementing the general as well as copy attention mechanisms in the coming versions.
@pskrunner14 Also, regarding this: whom should I contact?
@CoderINusE you're welcome to submit a PR.
@pskrunner14 should I pass an additional argument to the attention.forward method, or would it be clearer if I create separate classes for the different attention models and keep a single base class?
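One possible shape for the separate-classes option, with a shared base class handling the softmax and context mixing. These are hypothetical names sketching the design idea, not the repo's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BaseAttention(nn.Module):
    """Shared softmax/mixing logic; subclasses only define the score function."""
    def score(self, output, context):
        raise NotImplementedError

    def forward(self, output, context):
        # output: (batch, out_len, dim), context: (batch, in_len, dim)
        attn = F.softmax(self.score(output, context), dim=-1)
        mix = torch.bmm(attn, context)  # weighted sum of encoder states
        return mix, attn

class DotAttention(BaseAttention):
    def score(self, output, context):
        return torch.bmm(output, context.transpose(1, 2))

class GeneralAttention(BaseAttention):
    def __init__(self, dim):
        super().__init__()
        self.W_a = nn.Linear(dim, dim, bias=False)

    def score(self, output, context):
        return torch.bmm(output, self.W_a(context).transpose(1, 2))
```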
@CoderINusE please see the copy branch. This feature is partially implemented; we just need to iron out a few bugs and write tests.
I am not sure whether the comment in the current Attention module is a bit off? "output=tanh(w∗(attn∗context)+b∗output)" does not match the code or the 5th equation in the paper https://arxiv.org/pdf/1508.04025.pdf, unless b is also interpreted as a matrix? Thanks
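For reference, equation (5) in the paper is

$\tilde{h}_t = \tanh\big(W_c\,[c_t; h_t]\big)$

and the quoted docstring line only lines up with it if both w and b are read as matrices: splitting $W_c$ column-wise into $[W_1, W_2]$ gives $W_c\,[c_t; h_t] = W_1 c_t + W_2 h_t$, i.e. w plays the role of $W_1$ and b the role of $W_2$.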
I think there is a difference between the math written in the comment and the code. The comment's math applies a linear layer to (attn∗context) and then adds a term in output, whereas the code first concatenates (attn∗context) with output and only then applies a single linear projection. I am confused about that order. Please tell me why there is a gap.
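A minimal sketch of the concat-then-project order described above, with hypothetical names and simplified shapes; this illustrates equation (5), it is not the repo's actual forward:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

batch, out_len, in_len, dim = 2, 3, 5, 8
output = torch.randn(batch, out_len, dim)    # decoder hidden states h_t
context = torch.randn(batch, in_len, dim)    # encoder hidden states h_s
linear_out = nn.Linear(2 * dim, dim)         # plays the role of W_c in eq. (5)

# dot-product scores and attention weights
attn = F.softmax(torch.bmm(output, context.transpose(1, 2)), dim=-1)
mix = torch.bmm(attn, context)               # c_t = sum_s a_ts * h_s

# eq. (5): h~_t = tanh(W_c [c_t; h_t]) -- concatenate first, then project
combined = torch.cat((mix, output), dim=2)   # (batch, out_len, 2*dim)
h_tilde = torch.tanh(linear_out(combined))   # (batch, out_len, dim)
```

The two orders give the same result: applying one linear layer to the concatenation [c_t; h_t] equals applying separate linear maps to c_t and h_t and summing, which is how the docstring's w and b can be read as the two halves of the code's single projection.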