Attention-Based-Aspect-Extraction

Code for unsupervised aspect extraction, using Keras and its backends

14 Attention-Based-Aspect-Extraction issues, sorted by recently updated

![image](https://user-images.githubusercontent.com/8291897/94181964-d28ad300-fe7a-11ea-85ec-c541d4ee0920.png) Hi, I'm getting this issue even after annotating the loss function with `@tf.function`. I googled around for it, and it seems the loss function needs to be initialized at every epoch because...
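
For reference, a minimal sketch (not the repo's exact code) of the usual TF2 pattern: keep the `@tf.function`-decorated loss free of any variable creation so it only needs to be traced once instead of being re-created every epoch. It assumes an ABAE-style setup where the model already outputs the per-sample max-margin terms:

```python
import tensorflow as tf

@tf.function
def max_margin_loss(y_true, y_pred):
    # y_pred is assumed to already hold the per-sample hinge terms computed
    # inside the model; the loss only reduces them, so no variables or other
    # Python state are created inside the traced function.
    return tf.reduce_mean(y_pred)
```
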

It looks like when an item's text is empty, the prediction is simply not written to `labels.txt`. As a result, the predictions file has fewer lines than the original dataset. This may not be the only case with...
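
One possible fix, sketched below under the assumption of a hypothetical `predict_label` callable (not a function from this repo): write a placeholder label for empty items so `labels.txt` keeps exactly one line per input item.

```python
def write_labels(texts, predict_label, out_path="labels.txt"):
    """Write one prediction per input item, even when the text is empty.

    `predict_label` is a hypothetical callable returning an aspect label;
    empty items get a placeholder so line counts stay aligned with the input.
    """
    with open(out_path, "w", encoding="utf-8") as fout:
        for text in texts:
            label = predict_label(text) if text.strip() else "UNKNOWN"
            fout.write(label + "\n")
```
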

Trained models are stored in the `output` directory. Would it be better to store them in `pre_trained_model` instead? Or in some new directory...

https://drive.google.com/open?id=1L4LRi3BWoCqJt5h45J2GIAW9eP_zjiNc does not contain the pre-trained word embeddings

@madrugado, I really don't understand why we evaluate on the train set... And note: we have true labels for the test dataset only

Seed words are [lemmatized](https://github.com/madrugado/Attention-Based-Aspect-Extraction/blob/18fb18acc061290cfd90c25c81352675e0dad7a7/code/w2vEmbReader.py#L68) with `pymorphy2`, while `preprocess.py` [uses](https://github.com/madrugado/Attention-Based-Aspect-Extraction/blob/18fb18acc061290cfd90c25c81352675e0dad7a7/code/preprocess.py#L12) `nltk.stem.wordnet.WordNetLemmatizer` instead, in `english` mode. Why? Is this inconsistency intentional? /cc @madrugado
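
A quick check of the mismatch, assuming `pymorphy2` (with its Russian dictionaries) and `nltk` WordNet data are installed; the token choices are illustrative only. If the two lemmatizers return different normal forms for the same token, lemmatized seed words may never match the vocabulary produced by `preprocess.py`:

```python
import pymorphy2
from nltk.stem.wordnet import WordNetLemmatizer

morph = pymorphy2.MorphAnalyzer()
wordnet = WordNetLemmatizer()

for token in ["wines", "prices"]:
    # Print both normalizations side by side; any difference means the seed
    # words and the preprocessed corpus end up in different normal forms.
    print(token,
          "| pymorphy2:", morph.parse(token)[0].normal_form,
          "| wordnet:", wordnet.lemmatize(token))
```
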