TPA-LSTM
Training stuck at epoch 1
I was trying to train your model on Colab. With the repo cloned, I ran the command below:
!python main.py --mode train \
    --attention_len 16 \
    --batch_size 32 \
    --data_set muse \
    --dropout 0.2 \
    --learning_rate 1e-5 \
    --model_dir ./models/model \
    --num_epochs 40 \
    --num_layers 3 \
    --num_units 338
Everything seems fine up to that point, but training then gets stuck at epoch 1 and I don't know why. T_T https://colab.research.google.com/drive/1jaHTWg637wkD35C5LC5rRjej_G95L6tV?usp=sharing
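One way to tell whether training is genuinely hung or just silent (Colab can buffer stdout from `!python` invocations, so a slow epoch looks frozen) is to add per-batch logging with explicit flushing inside the training loop. A minimal sketch, assuming a generic loop; the names `train_one_epoch` and `step_fn` are hypothetical stand-ins, not part of the TPA-LSTM repo:

```python
import time

def train_one_epoch(num_batches, step_fn, log_every=10):
    """Run one epoch, printing flushed progress so a hung loop is visible.

    step_fn(batch_index) stands in for one real optimizer step and
    should return the batch loss. Returns the number of batches run.
    """
    start = time.time()
    for batch in range(num_batches):
        loss = step_fn(batch)  # hypothetical: one training step on this batch
        if batch % log_every == 0:
            elapsed = time.time() - start
            # flush=True defeats stdout buffering so Colab shows output live
            print(f"batch {batch}/{num_batches}  loss={loss:.4f}  "
                  f"{elapsed:.1f}s elapsed", flush=True)
    return num_batches

# Usage: a dummy step function standing in for the real training step.
train_one_epoch(30, step_fn=lambda b: 1.0 / (b + 1), log_every=10)
```

Alternatively, running the script as `!python -u main.py ...` (the `-u` flag disables Python's output buffering) gives live output without touching the code; if no per-batch output ever appears, the loop is stuck rather than merely slow.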
Hi, I have the same problem. Have you found a solution?