hierarchical-attention-networks
Same cell for word and sentence level
In worker.py, looking at lines 70-80, it seems you are using the same cell for both the word and sentence levels, but these should be two different LSTM cells.
AFAIR Cell is/was a template/factory, not a layer or parameter container
I think I see what you mean. BNLSTMCell is just a class declaration. Still, when BNLSTMCell.call() is invoked at the sentence level, it ends up using the same set of parameters that were already defined in the graph at the word level, because they fall under the same name scope as the already-defined parameters. So they are treated as one set of parameters, not two (word and sentence).
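An easy way to check this (just a sketch, assuming TF 1.x; run it after the graph is built): list the trainable variables and see whether there are two sets of RNN kernels/biases, one under each level's scope, or only one shared set.

```python
import tensorflow as tf

# Run after building the model graph: if the word- and sentence-level RNNs
# really have separate parameters, two sets of kernel/bias variables should
# appear, one under each level's scope; if they share, only one set shows up.
for v in tf.trainable_variables():
    print(v.name, v.shape)
```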
I am seeing a big performance difference after making this change. (I am trying it on a somewhat more complex multi-class, multi-label problem.)
```python
cell_word = BNLSTMCell(40, is_training)  # h-h batchnorm LSTMCell
cell = GRUCell(30)
cell_word = MultiRNNCell([cell] * 5)

cell_sent = BNLSTMCell(40, is_training)  # h-h batchnorm LSTMCell
cell = GRUCell(30)
cell_sent = MultiRNNCell([cell] * 5)
```
Similarly, if you expect different cells for the forward and backward directions, then you should define two more cells.
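Roughly what I have in mind (just a sketch, assuming the TF 1.x API; the GRU sizes and input shapes are made up for illustration):

```python
import tensorflow as tf
from tensorflow.contrib.rnn import GRUCell

# Illustrative inputs: [batch, time, features]
word_inputs = tf.placeholder(tf.float32, [None, None, 100])
sent_inputs = tf.placeholder(tf.float32, [None, None, 60])

# One distinct cell object per level and per direction,
# so each RNN owns its own parameters.
cell_word_fw, cell_word_bw = GRUCell(30), GRUCell(30)
cell_sent_fw, cell_sent_bw = GRUCell(30), GRUCell(30)

with tf.variable_scope('word'):
    word_outputs, _ = tf.nn.bidirectional_dynamic_rnn(
        cell_word_fw, cell_word_bw, word_inputs, dtype=tf.float32)

with tf.variable_scope('sentence'):
    sent_outputs, _ = tf.nn.bidirectional_dynamic_rnn(
        cell_sent_fw, cell_sent_bw, sent_inputs, dtype=tf.float32)
```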
Please correct me if I am wrong.
One other small side note: as far as I understand, it is general practice not to apply dropout at eval time, but as the code is written here, dropout stays active during evaluation.
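For reference, the usual pattern (a sketch, not the repo's code): feed the keep probability from the training loop, so dropout is only active during training and becomes a no-op at eval.

```python
import tensorflow as tf

# keep_prob is fed from outside: e.g. 0.5 for a training step,
# 1.0 for an eval step, which makes dropout a no-op during evaluation.
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
h = tf.placeholder(tf.float32, [None, 64])
h_dropped = tf.nn.dropout(h, keep_prob)

# train: sess.run(train_op, feed_dict={..., keep_prob: 0.5})
# eval:  sess.run(metrics,  feed_dict={..., keep_prob: 1.0})
```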
You might be right, my memory of TF's conventions is quite vague at this point.