
attention layer on top of LSTMs

Saran-nns opened this issue 3 years ago

Attention mechanisms seem to improve time-series prediction/forecasting and classification performance (sample paper).

Deep learning models in traja can easily accommodate an attention layer:

  1. Create a self-attention mechanism wrapper (Reference).
  2. Inject the attention layer instance on top of the LSTM layers, before and after encoding (examples here and here).
  3. Add an optional boolean argument for attention to the autoencoder (`ae`, `vae`, `vaegan`) base models; a rough sketch of steps 1–3 follows below.
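
A minimal PyTorch sketch of how steps 1–3 could fit together. The class names (`SelfAttention`, `LSTMEncoderWithAttention`), the additive scoring function, and the `attention` flag are illustrative assumptions, not traja's existing API:

```python
import torch
import torch.nn as nn


class SelfAttention(nn.Module):
    """Additive self-attention over an LSTM output sequence.

    Returns a weighted sum of the hidden states (context vector) plus the
    attention weights, so it can sit on top of any LSTM encoder.
    """

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, lstm_out: torch.Tensor):
        # lstm_out: (batch, seq_len, hidden_dim)
        weights = torch.softmax(self.score(lstm_out), dim=1)  # (batch, seq_len, 1)
        context = (weights * lstm_out).sum(dim=1)             # (batch, hidden_dim)
        return context, weights


class LSTMEncoderWithAttention(nn.Module):
    """LSTM encoder with an optional attention layer on top (step 3's flag)."""

    def __init__(self, input_dim: int, hidden_dim: int, attention: bool = False):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.attention = SelfAttention(hidden_dim) if attention else None

    def forward(self, x: torch.Tensor):
        out, (h_n, _) = self.lstm(x)
        if self.attention is not None:
            context, _ = self.attention(out)  # attention-pooled encoding
        else:
            context = h_n[-1]                 # fall back to the last hidden state
        return context


# Usage: encode a batch of 8 trajectories, each 100 time steps of (x, y).
encoder = LSTMEncoderWithAttention(input_dim=2, hidden_dim=32, attention=True)
z = encoder(torch.randn(8, 100, 2))  # z: (8, 32)
```

The same optional `attention` flag could then be threaded through the AE/VAE/VAEGAN base-model constructors so existing models keep their current behaviour by default.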

Saran-nns · May 04 '21 08:05