
Visualizing RNNs using the attention mechanism

Results 23 keras-attention issues

#one sample example Input = [1015 4 2 0 0 0 0 0 0 0] output = [ 65 116 2 0 0 0 0 0 0 0] (formatted in...

The initial state construction can be simplified as done in this change.

Hello, thanks a lot for providing an easy-to-understand tutorial and attention layer implementation. I am trying to use attention on a dataset with different input and output lengths. My...

`UnicodeEncodeError: 'charmap' codec can't encode characters in position 5-9: character maps to ` Using `encoding='utf-8'` on the `open` calls fixed it, as this Stack Overflow post describes: https://stackoverflow.com/questions/44391671/python3-unicodeencodeerror-charmap-codec-cant-encode-characters-in-position
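A minimal sketch of the fix described above: on Windows, `open()` defaults to the locale codec (often cp1252), which cannot encode many Unicode characters and raises `UnicodeEncodeError`. Passing `encoding='utf-8'` explicitly avoids this. The filename and sample text here are hypothetical, not from the repository.

```python
# Hypothetical example: "vocab.txt" and the sample text are placeholders.
text = "característica / 特徴"  # non-ASCII text that cp1252 cannot encode

# Explicit encoding on write avoids UnicodeEncodeError on Windows.
with open("vocab.txt", "w", encoding="utf-8") as f:
    f.write(text)

# Matching encoding on read avoids UnicodeDecodeError for the same reason.
with open("vocab.txt", "r", encoding="utf-8") as f:
    assert f.read() == text
```

Specifying the encoding on both the write and the read side keeps the behavior identical across platforms regardless of the system's default locale.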

help wanted

In the build function, what is the significance of resetting the states? `if self.stateful: super(AttentionDecoder, self).reset_states()`

Observing your code and trying to work with different input and output lengths, I saw that in the AttentionDecoder implementation with `return_probabilities=True`, the shape of the returned probabilities is (None, self.timesteps,...

Hi, I am new to the attention mechanism and I found your code and tutorials very helpful for beginners like me! Currently, I am trying to use your attention decoder to...

Hi Zafarali, I am trying to use your attention network to learn seq2seq machine translation with attention. My source language vocab is of size 32,000 and target vocab size...

I'm fairly new to this and for some reason I'm having ![crazy instability issues](http://139.82.47.36:8888/files/rrsayao/64%3B%20256%3B%2064%3B%2040%3B%2017%3B%20124%3B%200.2%3B%200.2%3B%20200%3B%20199%3B%20200%3B%200%3B%200.85423%3B%200.83730%3B%20titan%3B%200.8.jpg) during training. I've witnessed over a 10% decrease in validation accuracy at some point. It's a many-to-many...

I am using your decoder to implement my sequence encoder/decoder, but I don't know how to get the decoder output to have the same shape as my input....