pytorch-seq2seq

Attention type

Open ratis86 opened this issue 5 years ago • 8 comments

Can somebody tell me what type of attention is used in this lib? I checked it against the Bahdanau and Luong attentions and it doesn't look like either, or maybe I'm missing something!
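For reference, a minimal sketch of the two score functions being compared (tensor names and shapes here are illustrative, not taken from this repo):

```python
import torch
import torch.nn as nn

hidden_size = 8
decoder_state = torch.randn(1, hidden_size)        # current decoder hidden state h_t
encoder_outputs = torch.randn(5, hidden_size)      # encoder states h_s for 5 source steps

# Luong "dot" score: score(h_t, h_s) = h_t^T h_s
dot_scores = encoder_outputs @ decoder_state.squeeze(0)          # shape (5,)

# Bahdanau additive score: score(h_t, h_s) = v^T tanh(W_a [h_t; h_s])
W_a = nn.Linear(2 * hidden_size, hidden_size, bias=False)
v = nn.Linear(hidden_size, 1, bias=False)
concat = torch.cat([decoder_state.expand(5, -1), encoder_outputs], dim=1)
additive_scores = v(torch.tanh(W_a(concat))).squeeze(1)          # shape (5,)
```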

ratis86 avatar Sep 18 '18 12:09 ratis86

Actually, after double-checking it, it looks like it's the dot attention from Luong. Is there a reason to use the dot attention and not the general one?
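For context, the only difference in Luong's "general" variant is one learned projection between the two states; a minimal sketch (names and shapes are illustrative):

```python
import torch
import torch.nn as nn

hidden_size = 8
decoder_state = torch.randn(1, hidden_size)     # h_t
encoder_outputs = torch.randn(5, hidden_size)   # h_s

# dot:     score(h_t, h_s) = h_t^T h_s
dot_scores = encoder_outputs @ decoder_state.squeeze(0)

# general: score(h_t, h_s) = h_t^T W_a h_s  (apply a learned W_a to the encoder side, then dot)
W_a = nn.Linear(hidden_size, hidden_size, bias=False)
general_scores = W_a(encoder_outputs) @ decoder_state.squeeze(0)
```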

ratis86 avatar Sep 18 '18 13:09 ratis86

@ratis86 thanks for pointing this out. There's no particular reason that I'm aware of. You can contact the respective contributor for that. However, we're going to be implementing the general as well as the copy attention mechanism in the coming versions.

pskrunner14 avatar Sep 20 '18 13:09 pskrunner14

@pskrunner14 Also regarding this issue: whom should I contact?

rrkarim avatar Oct 06 '18 22:10 rrkarim

@CoderINusE you're welcome to submit a PR.

pskrunner14 avatar Oct 07 '18 04:10 pskrunner14

@pskrunner14 should I pass an additional argument to the attention.forward method, or would it be clearer if I create separate classes for the different attention models and keep a single base class?
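One possible shape for the second option, as a rough sketch only (the class and method names are hypothetical, not the repo's API):

```python
import torch
import torch.nn as nn

class BaseAttention(nn.Module):
    """Hypothetical base class: subclasses only define the score function."""
    def forward(self, output, context):
        # output: (batch, tgt_len, dim), context: (batch, src_len, dim)
        scores = self.score(output, context)            # (batch, tgt_len, src_len)
        attn = torch.softmax(scores, dim=-1)            # normalize over source positions
        mix = torch.bmm(attn, context)                  # weighted sum of encoder states
        return mix, attn

class DotAttention(BaseAttention):
    def score(self, output, context):
        return torch.bmm(output, context.transpose(1, 2))

class GeneralAttention(BaseAttention):
    def __init__(self, dim):
        super().__init__()
        self.linear_in = nn.Linear(dim, dim, bias=False)
    def score(self, output, context):
        return torch.bmm(self.linear_in(output), context.transpose(1, 2))
```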

rrkarim avatar Oct 07 '18 09:10 rrkarim

@CoderINusE please see the copy branch. This feature is partially implemented; we just need to iron out a few bugs and write tests.

pskrunner14 avatar Oct 07 '18 10:10 pskrunner14

I am not sure whether the comment in the current Attention module is a bit off. `output = tanh(w * (attn * context) + b * output)` does not match the code or the 5th equation in the paper (https://arxiv.org/pdf/1508.04025.pdf), unless b is also interpreted as a matrix? Thanks
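For reference, a short derivation of why the two forms can coincide, assuming the docstring's w and b both denote weight matrices rather than a bias:

```latex
% Eq. (5) in Luong et al. (2015): the attentional hidden state
\tilde{h}_t = \tanh\!\left(W_c \, [c_t ; h_t]\right)

% Splitting W_c column-wise into two blocks W_1, W_2 acting on c_t and h_t:
W_c \, [c_t ; h_t] = W_1 c_t + W_2 h_t

% so the docstring form  tanh(w * (attn * context) + b * output)
% matches Eq. (5) only if both w and b are weight matrices, not a bias term.
```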

lmatz avatar Oct 31 '18 04:10 lmatz

I think there is a difference between the math written in the comments and the code. The main difference is that the math applies a linear layer to (attn * context) and then combines it with output, whereas the code concatenates (attn * context) and output first and then applies the projection linear layer. I am confused about that order. Please tell me why there is a gap.
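The gap seems to be only notational: projecting the two vectors separately and summing is the same linear map as concatenating them first and applying a single projection. A small numeric check, using hypothetical names rather than the repo's code:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 4
c = torch.randn(1, dim)   # attn * context (the attention mix)
h = torch.randn(1, dim)   # decoder output

# One projection applied to the concatenation [c; h] ...
linear_out = nn.Linear(2 * dim, dim, bias=False)
concat_then_project = linear_out(torch.cat([c, h], dim=1))

# ... equals two separate projections summed, using the two halves of the same weight.
W1, W2 = linear_out.weight[:, :dim], linear_out.weight[:, dim:]
project_then_sum = c @ W1.T + h @ W2.T

print(torch.allclose(concat_then_project, project_then_sum))  # True
```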

woaksths avatar Apr 25 '20 04:04 woaksths