self-attentive-rnn topic

List self-attentive-rnn repositories

neat-vision

249 Stars, 24 Forks

Neat (Neural Attention) Vision is a framework-agnostic visualization tool for the attention mechanisms of deep-learning models for Natural Language Processing (NLP) tasks.

Self-Attentive-tensorflow

193 Stars, 39 Forks

Tensorflow re-implementation of "A Structured Self-Attentive Sentence Embedding" (Lin et al., 2017)
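Both repositories center on the attention mechanism from Lin et al. (2017), which computes a multi-hop attention matrix over RNN hidden states and uses it to pool a fixed-size sentence embedding. The following is a minimal NumPy sketch of that computation, written for illustration only — it is not taken from either repository, and the function and parameter names (`structured_self_attention`, `W_s1`, `W_s2`) follow the paper's notation rather than any repo's API.

```python
import numpy as np

def structured_self_attention(H, W_s1, W_s2):
    """Structured self-attention (Lin et al., 2017), illustrative sketch.

    H    : (n, 2u) matrix of RNN hidden states for n tokens.
    W_s1 : (d_a, 2u) first projection.
    W_s2 : (r, d_a) second projection; r is the number of attention hops.
    Returns the (r, n) attention matrix A and the (r, 2u) embedding M = A H.
    """
    # A = softmax(W_s2 tanh(W_s1 H^T)), softmax taken over the n tokens
    scores = W_s2 @ np.tanh(W_s1 @ H.T)           # (r, n)
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)             # each hop sums to 1
    M = A @ H                                     # (r, 2u) sentence embedding
    return A, M

# toy example: 5 tokens, hidden size 2u=8, d_a=6, r=3 hops
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 8))
A, M = structured_self_attention(H,
                                 rng.standard_normal((6, 8)),
                                 rng.standard_normal((3, 6)))
print(A.shape, M.shape)
```

Each of the `r` rows of `A` is one attention distribution over the tokens, so `M` stacks `r` differently-weighted averages of the hidden states; the paper additionally penalizes `‖A Aᵀ − I‖²` so the hops attend to distinct parts of the sentence.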