
Hierarchical Attention Networks for document classification

13 issues

When I used the Yahoo pretrained model to fine-tune on my data, I got a non-convergent result (details below). Model's parameters: {'batch_size': 128, 'num_epoches': 1, 'lr': 0.01, 'momentum': 0.9, 'word_hidden_size': 50, 'sent_hidden_size': 50, 'es_min_delta':...

https://github.com/uvipen/Hierarchical-attention-networks-pytorch/blob/b1ea9e0b7bc294364f213e42507a6fe9d502a044/src/word_att_model.py#L39

The batch size of the last batch is handled for the evaluation dataset but not for training, resulting in a wrong dimension for the hidden state of the word attention net. https://github.com/uvipen/Hierarchical-attention-networks-pytorch/blob/b1ea9e0b7bc294364f213e42507a6fe9d502a044/train.py#L85
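The fix described above can be sketched as follows: re-initialize the GRU hidden state with the size of the *current* batch, not the configured batch size, so the final (possibly smaller) batch gets a matching hidden state. `init_hidden` and the sizes are illustrative, not the repo's actual code.

```python
import torch

batch_size = 128        # configured batch size
word_hidden_size = 50   # matches the issue's hyperparameters

def init_hidden(current_batch_size, hidden_size):
    # Bidirectional GRU hidden state: (num_directions, batch, hidden)
    return torch.zeros(2, current_batch_size, hidden_size)

# The last training batch may be smaller than batch_size; re-initializing
# per batch avoids the dimension mismatch reported in the issue.
for current_batch_size in (batch_size, 37):  # 37 stands in for a final partial batch
    h = init_hidden(current_batch_size, word_hidden_size)
    assert h.shape == (2, current_batch_size, word_hidden_size)
```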

It's a wonderful text classification system! I wonder what form the input should take. Could you help me?

I tried to run the code on GPU, but it seems to still run on CPU.
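A common cause of this is that the model is moved to the GPU but manually created tensors (such as the zero-initialized hidden states) are not. A minimal sketch of consistent device placement, using a stand-in `Linear` layer rather than the repo's HAN model:

```python
import torch

# Pick the GPU when available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(8, 2).to(device)            # stand-in for the HAN model
inputs = torch.randn(4, 8, device=device)           # batches must live on the same device
outputs = model(inputs)

# Any manually created hidden states must also be placed on the device,
# otherwise the forward pass silently stays on (or crashes between) devices.
hidden = torch.zeros(2, 4, 50, device=device)
```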

When running `train.py` for the first time, adding a check would avoid this error: ``` FileNotFoundError: [Errno 2] No such file or directory: 'trained_models/logs.txt' ```
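The check the issue suggests could look like this: create the output directory before opening the log file for writing (`open` creates a missing file, but not a missing directory).

```python
import os

log_path = "trained_models/logs.txt"

# Create the parent directory if it does not exist yet; exist_ok avoids
# an error on subsequent runs when the directory is already there.
os.makedirs(os.path.dirname(log_path), exist_ok=True)

with open(log_path, "w") as f:
    f.write("training started\n")
```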

Hi, in your hierarchical_att_model.py, you initialize the hidden states for the GRU with zeros: `self.word_hidden_state = torch.zeros(2, batch_size, self.word_hidden_size)` `self.sent_hidden_state = torch.zeros(2, batch_size, self.sent_hidden_size)` According to the torch documentation of...

Hi, the Google Drive link for the pretrained models doesn't work! Could you fix it? Thanks!