
Visual Attention Via Seq2Seq with Attention

Open · AntreasAntoniou opened this issue 8 years ago · 1 comment

Thanks for your awesome contribution. I was wondering whether I can use this library to achieve visual attention. My idea is to use the seq2seq model with attention and feed the convnet's flattened feature layer as the input. Would that work with this library? I assume it would, but would it be proper visual attention, or just a weird hybrid? My main concern is that, from the readme, it looks like the encoder is always an LSTM. Is that correct, or am I mistaken? If it is, is there a way to use the output of a stack of CNN filters as the input to the decoder instead? Please let me know. Thanks.
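To make the question concrete, here is a rough sketch of what I have in mind (plain Keras, not this library's API; the layer sizes and names are made up for illustration). Instead of fully flattening the conv output, I would keep one feature vector per spatial location, so an attention decoder would have a "sequence" of annotation vectors to attend over:

```python
# Hypothetical sketch: turn a CNN feature map into a sequence of spatial
# feature vectors, the form an attention decoder typically expects
# (one annotation vector per image location). Sizes are arbitrary.
from keras.layers import Input, Conv2D, MaxPooling2D, Reshape
from keras.models import Model

img = Input(shape=(224, 224, 3))                              # assumed input size
x = Conv2D(64, (3, 3), activation='relu', padding='same')(img)
x = MaxPooling2D((2, 2))(x)                                   # -> (112, 112, 64)
x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2))(x)                                   # -> (56, 56, 128)

# Keep one 128-d vector per location instead of flattening everything,
# giving 56 * 56 = 3136 "timesteps" for the attention mechanism.
annotations = Reshape((56 * 56, 128))(x)

encoder = Model(img, annotations)
# The decoder would then attend over `annotations` in place of the
# LSTM encoder's output sequence.
```

Would something like this tensor be usable as the decoder's attention input in this library, or does the attention layer assume the encoder is an LSTM?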

AntreasAntoniou · May 21 '16 23:05