Hierarchical-attention-networks-pytorch
Is init hidden state necessary?
Hi,
In your hierarchical_att_model.py, you initialize the hidden states for the GRUs with zeros:
```python
self.word_hidden_state = torch.zeros(2, batch_size, self.word_hidden_size)
self.sent_hidden_state = torch.zeros(2, batch_size, self.sent_hidden_size)
```
According to the PyTorch documentation for GRU, h_0 defaults to zeros if it is not provided. Is there any reason to do this manually?
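For reference, here is a minimal sketch (with made-up input/hidden sizes and batch size, not the ones from this repo) suggesting that omitting h_0 produces the same output as passing an explicit all-zeros tensor:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical sizes for illustration only
gru = nn.GRU(input_size=10, hidden_size=20, bidirectional=True)
x = torch.randn(5, 3, 10)  # (seq_len, batch_size, input_size)

# Explicit zero initial hidden state:
# shape (num_layers * num_directions, batch_size, hidden_size)
h0 = torch.zeros(2, 3, 20)
out_explicit, _ = gru(x, h0)

# No h_0 passed: PyTorch defaults it to zeros
out_default, _ = gru(x)

print(torch.allclose(out_explicit, out_default))  # True
```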
Thanks.