TextClassification-Keras

Text classification models implemented in Keras, including: FastText, TextCNN, TextRNN, TextBiRNN, TextAttBiRNN, HAN, RCNN, RCNNVariant, etc.

Issues (16)

Bumps [numpy](https://github.com/numpy/numpy) from 1.17.2 to 1.22.0. Release notes, sourced from numpy's releases: NumPy 1.22.0 is a big release featuring the work of 153 contributors spread...

Label: dependencies

Bumps [tensorflow](https://github.com/tensorflow/tensorflow) from 2.0.1 to 2.7.2. Release notes, sourced from tensorflow's releases: TensorFlow 2.7.2 introduces several vulnerability fixes: Fixes a code injection in saved_model_cli (CVE-2022-29216) Fixes...

Label: dependencies

The IMDB dataset is tokenized by words, so after the reshape in https://github.com/ShawnyXiao/TextClassification-Keras/blob/master/model/HAN/main.py#L20-L23 the word-level dimension contains whole-word tokens, not chars/word parts. Does it make sense at all? If it is used...
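For context, a minimal sketch of the padding-and-reshape pattern this issue points at (the names `maxlen_sentence`/`maxlen_word` and the values below are illustrative assumptions, not taken verbatim from the repo). It makes the point concrete: each IMDB review is a flat list of word indices, so the reshape merely slices it into fixed-size chunks of word tokens, with no real sentence boundaries:

```python
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence

max_features = 5000
maxlen_sentence = 16  # pseudo-"sentences" per review (illustrative)
maxlen_word = 25      # word tokens per pseudo-sentence (illustrative)

(x_train, y_train), _ = imdb.load_data(num_words=max_features)

# Pad each flat review to a fixed total length, then slice it into
# maxlen_sentence chunks of maxlen_word word indices each.
x_train = sequence.pad_sequences(x_train, maxlen=maxlen_sentence * maxlen_word)
x_train = x_train.reshape((len(x_train), maxlen_sentence, maxlen_word))
print(x_train.shape)  # (25000, 16, 25)
```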

Hi, thank you for these codes. I wonder if it is possible to get the attention weight vectors at the word level, and also the sentence-level attention, in the HAN model? I want to plot...
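One common way to get at this (a hedged sketch, not the repo's actual API): have the attention layer return its softmax weights alongside the context vector, then build a second `Model` that maps the same inputs to those weights. Everything below, including the layer itself, is a generic additive-attention illustration at the word level; the same trick applies at the sentence level of HAN:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

class AttentionWithWeights(layers.Layer):
    """Additive attention that returns (context, weights)."""
    def build(self, input_shape):
        d = int(input_shape[-1])
        self.W = self.add_weight(name="W", shape=(d, d), initializer="glorot_uniform")
        self.b = self.add_weight(name="b", shape=(d,), initializer="zeros")
        self.u = self.add_weight(name="u", shape=(d, 1), initializer="glorot_uniform")
        super().build(input_shape)

    def call(self, x):  # x: (batch, time, features)
        v = tf.tanh(tf.tensordot(x, self.W, axes=1) + self.b)
        scores = tf.tensordot(v, self.u, axes=1)      # (batch, time, 1)
        weights = tf.nn.softmax(scores, axis=1)       # attention over time
        context = tf.reduce_sum(weights * x, axis=1)  # (batch, features)
        return context, tf.squeeze(weights, -1)

inputs = layers.Input(shape=(100,))
x = layers.Embedding(5000, 128)(inputs)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
context, att_weights = AttentionWithWeights()(x)
outputs = layers.Dense(2, activation="softmax")(context)

model = Model(inputs, outputs)
# After training `model`, this second model yields per-word weights to plot:
weight_model = Model(inputs, att_weights)
```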

The Attention class implementation seems to have a problem. I ran the TextBiRNN model and got the following error:

Traceback (most recent call last):
  File "train.py", line 86, in <module>
    last_activation='softmax').get_model()
  File "/media/gaoya/disk/Applications/keras/短文本分类/model.py", line 282, in get_model
    x_word = Attention(self.maxlen_word)(x_word)
  File "/media/gaoya/disk/Applications/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py", line 75, in symbolic_fn_wrapper
    return func(*args,...

Hello, while studying your code I wanted to inspect the weights the attention mechanism assigns to my text data, i.e. the a in the attention computation of TextAttBiRNN. After successfully printing a, however, I found that every word in each sentence is assigned the same weight: for example, in a 23-word sentence each word gets weight 1/23. I am puzzled about what could cause this; the dataset should be fine, since a plain RNN trained on it also works well. Sorry for such a basic question, and I hope you can answer. Thanks!
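For what it is worth, a uniform 1/23 is exactly what softmax produces when the attention scores are identical at every time step (for example, when the score computation has collapsed to a constant, or the attention weights never trained). A quick arithmetic check:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

T = 23
scores = np.zeros(T)    # identical score at every time step
print(softmax(scores))  # every entry is 1/23, about 0.0435
```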

Hi Shawny, I ran your RCNN code and found the accuracy was 0.5. Do you know why this happened? Looking forward to your reply. Thanks!