hierarchical-attention-networks

Document classification with Hierarchical Attention Networks in TensorFlow. WARNING: project is currently unmaintained, issues will probably not be addressed.

17 hierarchical-attention-networks issues (sorted by recently updated)

I was wondering where in the code you are initializing the embeddings for the special tokens in the vocabulary (like unknown and padding words) - shouldn't these be set to...
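A common approach, though not necessarily what this repo does, is to pin the padding embedding to zero (so it never receives gradient) while letting the unknown-token row train normally. A minimal TF 1.x sketch, assuming the hypothetical convention that index 0 is PAD:

```python
import tensorflow as tf

vocab_size, embed_dim = 50000, 200  # illustrative sizes

# Row 0 (PAD) is a constant zero vector; only the remaining rows are trainable.
pad_row = tf.zeros([1, embed_dim])
trainable_rows = tf.get_variable(
    "embedding_matrix", [vocab_size - 1, embed_dim],
    initializer=tf.random_uniform_initializer(-0.1, 0.1))
embedding_matrix = tf.concat([pad_row, trainable_rows], axis=0)

word_ids = tf.placeholder(tf.int32, [None, None])  # [batch, time]
word_vectors = tf.nn.embedding_lookup(embedding_matrix, word_ids)
```

Because the PAD row is built from `tf.zeros` rather than a variable, gradients only flow into the trainable rows, so padding stays at zero throughout training.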

Hi ematvey, thanks for sharing the code! I notice the attention weights for sentences & words are not masked according to their actual lengths, which means the model will "pay...
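For reference, the usual fix is to push padded positions to a large negative value before the softmax so they receive near-zero weight. A minimal sketch with a hypothetical `masked_softmax` helper over `[batch, time]` attention logits:

```python
import tensorflow as tf

def masked_softmax(scores, lengths):
    """Softmax over the time axis that ignores padded positions.

    scores:  [batch, time] unnormalized attention logits
    lengths: [batch] actual sequence lengths
    """
    mask = tf.sequence_mask(lengths, maxlen=tf.shape(scores)[1],
                            dtype=tf.float32)
    # Padded positions get a -1e9 logit, i.e. ~0 probability after softmax.
    scores = scores + (1.0 - mask) * -1e9
    return tf.nn.softmax(scores)
```

The same helper would apply at both levels of the hierarchy: word attention masked by sentence lengths, sentence attention masked by document lengths.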

The method [`task_specific_attention`](https://github.com/ematvey/deep-text-classifier/blob/master/model_components.py#L86) applies attention to the projected vectors instead of the hidden vectors (the outputs of the RNN cell). Has this been done purposefully, or has the information on attention according...
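For comparison, in the paper's formulation (Yang et al., 2016) the attention weights are computed from a projection u = tanh(Wh + b), but the weighted sum is taken over the hidden vectors h themselves, not over u. A minimal sketch of that variant (this is not the repo's `task_specific_attention`):

```python
import tensorflow as tf

def attention_pool(hidden, attn_dim):
    """Additive attention as in Yang et al. (2016).

    hidden: [batch, time, units] RNN outputs
    Returns a [batch, units] summary vector.
    """
    u = tf.layers.dense(hidden, attn_dim, activation=tf.tanh)  # projection
    context = tf.get_variable("attn_context", [attn_dim])
    scores = tf.tensordot(u, context, axes=[[2], [0]])         # [batch, time]
    alpha = tf.expand_dims(tf.nn.softmax(scores), -1)          # [batch, time, 1]
    return tf.reduce_sum(alpha * hidden, axis=1)               # pool h, not u
```

The only difference from pooling the projections is the final line: summing `alpha * hidden` keeps the full RNN representation, whereas summing `alpha * u` passes the lower-dimensional tanh projection downstream instead.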

From the code it seems the embedding is not initialized with a pre-trained embedding (e.g. word2vec), although the paper says it is. Am I right, or did I miss something?...
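If one wanted to wire in pre-trained vectors, a minimal sketch could look like the following; the file name and the numpy array are hypothetical, standing in for a `[vocab_size, embed_dim]` matrix built from word2vec vectors for the model's vocabulary:

```python
import numpy as np
import tensorflow as tf

# Hypothetical: pre-trained vectors aligned to the model's vocabulary indices.
pretrained = np.load("word2vec_for_vocab.npy")  # [vocab_size, embed_dim]

embedding_matrix = tf.get_variable(
    "embedding_matrix",
    shape=pretrained.shape,
    initializer=tf.constant_initializer(pretrained),
    trainable=True)  # keep trainable so the embeddings can be fine-tuned
```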

Hello, while running yelp_prepare.py I got the error log below. The code was run with the Yelp dataset (round 10), TensorFlow 1.1.0, and Python 3.5.2 on Linux. 0it [00:00, ?it/s] Traceback...

Bumps [tqdm](https://github.com/tqdm/tqdm) from 4.22.0 to 4.66.3. Release notes, sourced from tqdm's releases: tqdm v4.66.3 stable — cli: eval safety (fixes CVE-2024-34062, GHSA-g7vv-2v7x-gj9p); tqdm v4.66.2 stable — pandas: add DataFrame.progress_map (#1549), notebook: fix...

Label: dependencies