
han_my_functions.py --> TypeError: float() argument must be a string or a number, not 'NoneType'

Open vedtam opened this issue 3 years ago • 4 comments

Hi,

First, let me thank you for the detailed and really well-explained HAN example! I had been looking for days for a source like this to get up and running with attention visualisation in NLP.

I have prepared my data as in the description, and everything runs smoothly until I get to training with han.fit_generator(...), which stops and throws:

[screenshot: training traceback ending in the TypeError, 2021-01-26]

I've noticed that it has something to do with the metrics, but I couldn't figure out what to do next.

By the way, is there a specific version of Keras and TensorFlow I should run this example with? Currently I'm on tensorflow 2.4.1 and keras 2.4.3 (both probably the latest).

Thanks!!

vedtam · Jan 26 '21 10:01

Hi, thank you for your interest! The code was tested with Python 3.5.5 and 3.6.1, tensorflow-gpu 1.5.0, Keras 2.2.0, and gensim 3.2.0. I guess that's probably the problem (Keras has since been integrated into TF). I should update the code regularly, but I don't have the time. If you end up updating the code, feel free to make a pull request.
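
A quick sanity check (just a sketch, not part of the repo) to confirm the environment matches the tested versions above:

```python
# Minimal version check against the stack the code was tested with
# (tensorflow-gpu 1.5.0, Keras 2.2.0, gensim 3.2.0); adjust to your setup.
import tensorflow as tf
import keras
import gensim

print("tensorflow:", tf.__version__)  # tested: 1.5.0 (GPU build)
print("keras:", keras.__version__)    # tested: 2.2.0
print("gensim:", gensim.__version__)  # tested: 3.2.0
```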

Tixierae · Jan 26 '21 13:01

Thanks so much for the details. I've created an env with the above dependencies and now things work as expected. I've been trying to add my own data, which consists of 8 categories (instead of the default 5). My dataset contains 4000 training and ~380 test samples.

After preprocessing my data (using your preprocessor script), I can load the word vectors and train a model, but when analysing the results another error pops up: max() arg is an empty sequence, and all the accuracy and loss plots are blank:

[screenshot: "max() arg is an empty sequence" error and blank accuracy/loss plots, 2021-01-26]

If I proceed with re-initialising and training a model to get a visualisation of the document embeddings, I hit another error: operands could not be broadcast together with shapes (8,8) (7,7):

[screenshot: broadcast error traceback, 2021-01-27]

I've updated n_cats=8 initially and restarted the notebook several times, but it still complains about incompatible shapes (8,8) (7,7). I'm wondering, is this because of how the batches are created programmatically? Maybe some documents in a batch don't have the same size? Pff, I can't figure it out.

vedtam · Jan 26 '21 22:01

Did you find what the problem was? It's difficult to troubleshoot this issue without a reproducible example, and I am very busy these days anyway, but could it possibly be due to your labels following a zero-based index? By default, they are assumed to follow a one-based index. Change this parameter if not: https://github.com/Tixierae/deep_learning_NLP/blob/cfd34524395537dfdfc352520ad7387c9f69bd0f/HAN/preprocessing.py#L64
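
To see how an off-by-one in the labels can produce the (8,8) vs (7,7) mismatch, here is a minimal numpy sketch (illustrative only, not the repo's actual preprocessing), assuming the code shifts labels down by one because it expects them to be one-based:

```python
import numpy as np

# Hypothetical zero-based labels for 8 categories (0..7), as in the dataset above.
labels = np.array([3, 0, 7, 1, 5, 2, 6, 4])

# If preprocessing assumes one-based labels (1..8), it shifts them down by one.
shifted = labels - 1          # now ranges from -1 to 6

# Any downstream step that infers the number of classes from the shifted labels
# sees at most 7 valid indices, while the model was built with n_cats = 8,
# hence e.g. a (7,7) matrix that cannot be broadcast against an (8,8) one.
print(shifted.max() + 1)      # -> 7 instead of 8
```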

Tixierae · Jan 28 '21 15:01

@Tixierae thanks! I figured it out and got the notebook working. I'm wondering, why is there so little info (after days of searching I found only yours and one other source) about this approach of getting the weights over words and thus being able to explain an NLP deep learning model's behaviour? Is it really so obvious that anyone (but me) can implement it? Or is it already outdated, along with deep learning NLP models in general (there might be a better way, like transformers or something)?

Thanks!

vedtam · Feb 09 '21 07:02