Recurrent-Knowledge-Graph-Embedding
How do you implement the attention-gated hidden layer?
I find that there is only one file, "LSTMTagger.py", which implements the RKGE model. But in this file, the output of the LSTM layer is used directly to form the final h that is fed into the max-pooling layer. Maybe I have missed something important, so could you tell me how you implemented the attention-gated hidden layer?
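
For concreteness, here is a minimal sketch of what I would expect an attention-gated hidden layer to look like in PyTorch, assuming the usual gated blend h_t = a_t * h~_t + (1 - a_t) * h_{t-1}, where a_t is a learned attention gate and h~_t is the candidate state. All class and variable names below are my own assumptions for illustration, not code from this repository:

```python
import torch
import torch.nn as nn

class AttentionGatedRNN(nn.Module):
    """Sketch of an attention-gated recurrent layer (hypothetical,
    not the authors' implementation). At each step t, a scalar gate
    a_t decides how much of the candidate state h~_t (computed from
    the current input) to mix with the previous hidden state:

        h_t = a_t * h~_t + (1 - a_t) * h_{t-1}
    """

    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        self.hidden_dim = hidden_dim
        # Candidate hidden state, as in a vanilla RNN cell.
        self.candidate = nn.Linear(input_dim + hidden_dim, hidden_dim)
        # Attention gate: one scalar per step, conditioned on
        # the current input and the previous hidden state.
        self.gate = nn.Linear(input_dim + hidden_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim)
        batch, seq_len, _ = x.shape
        h = x.new_zeros(batch, self.hidden_dim)
        outputs = []
        for t in range(seq_len):
            xt = x[:, t, :]
            combined = torch.cat([xt, h], dim=-1)
            h_tilde = torch.tanh(self.candidate(combined))
            a_t = torch.sigmoid(self.gate(combined))  # (batch, 1)
            # Attention-gated update: blend candidate with previous state.
            h = a_t * h_tilde + (1.0 - a_t) * h
            outputs.append(h.unsqueeze(1))
        # (batch, seq_len, hidden_dim), ready for e.g. a max-pooling layer.
        return torch.cat(outputs, dim=1)
```

Is something along these lines what the paper intends, or is the gating handled differently (e.g. inside the LSTM itself)?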