Abdul Waheed

Results: 22 comments by Abdul Waheed

Comment out `lines 15 and 16 in UtteranceRNN.py` and re-train. PS: it will now take more time, since RoBERTa will also be trained.

Were you training it on CPU?

In `config.py` the batch size is 64, which will cause `CUDA out of memory`; try a smaller batch size (8 should be fine, or you can go with 4 as well)...
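For illustration, a minimal sketch of the kind of change being suggested. The variable names here are assumptions for the example, not the repository's actual `config.py` contents:

```python
# Hypothetical config.py entries; the real names in the repo may differ.
batch_size = 8  # reduced from 64 to avoid CUDA out-of-memory on smaller GPUs

# If the smaller batch hurts convergence, the effective batch size of 64
# can be recovered by accumulating gradients over several micro-batches:
accumulate_grad_batches = 64 // batch_size  # 8 micro-batches per optimizer step

assert batch_size * accumulate_grad_batches == 64
```

Gradient accumulation trades wall-clock time for memory: each optimizer step sees the same number of examples, but only `batch_size` of them are resident on the GPU at once.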

Hi @PolKul , Yes this is normal because for each utterance we need dialogue history hence we can't parallelize the training. Although here is [Kaggle Kernel](https://www.kaggle.com/eabdul/casa-dialogue-act-classifier) to train it on...
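The sequential dependency can be sketched in plain Python. `encode` is a hypothetical placeholder for the model's utterance encoder, not the project's actual code; the point is only that step *t* consumes the history produced by steps < *t*:

```python
def encode(utterance, history):
    # Stand-in for an RNN step over the dialogue: the output depends
    # on every prior utterance, so it cannot be computed in isolation.
    return (utterance, tuple(history))

def classify_dialogue(utterances):
    history, outputs = [], []
    for utt in utterances:          # this loop cannot be parallelized:
        out = encode(utt, history)  # step t needs history from steps < t
        outputs.append(out)
        history.append(utt)
    return outputs

outs = classify_dialogue(["hi", "how are you?", "fine"])
# outs[2] carries the full history ("hi", "how are you?") with it
```

This is why utterances within one dialogue must be processed in order, even though different dialogues could still be batched against each other.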

Yes, @glicerico, on its own it will not be useful, but if you have a `label dictionary` for your training data then it will be.

@Christopher-Thornton you can create a PR if you have `inference.py` and have tested it; otherwise I will work on it ASAP.

Hi @nanzhao, I have just fixed an unrelated issue and it's running on my system. It seems you don't have a GPU on your machine, but the Trainer is configured for...

Use this **[Kaggle Kernel](https://www.kaggle.com/eabdul/casa-dialogue-act-classifier?scriptVersionId=52851658)** to train without any manual setup, but make sure you have a `wandb API key`. Also, you have to enable the GPU on Kaggle.

@Christopher-Thornton @nanzhao, there are 780 utterances with this dialogue act in the training data itself, and there are 43 unique dialogue acts including `"fo_o_fw_""_by_bc"`; the paper also uses 43 classes....
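Counts like these can be verified directly from the training labels with the standard library. A sketch assuming the labels load as a plain list; the label values below are placeholders standing in for the real data:

```python
from collections import Counter

# Hypothetical label list; in practice this would come from the
# training CSV. The rare act is written here with a single inner
# quote, as it appears once CSV quote-escaping is undone.
labels = ['sd', 'b', 'fo_o_fw_"_by_bc', 'sd', 'fo_o_fw_"_by_bc']

counts = Counter(labels)
num_classes = len(counts)          # number of unique dialogue acts
rare = counts['fo_o_fw_"_by_bc']   # occurrences of the rare act
```

Running the same two lines over the real training split should reproduce the 43 unique classes and the 780 occurrences quoted above.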