training-charRNN
Training on big files (25+ MB) gets killed
I'm training the LSTM on some 80 MB files with these hyperparameters:
python train.py --data_dir=./data --rnn_size 2048 --num_layers 2 --seq_length 256 --batch_size 128 --output_keep_prob 0.25
but after a few minutes the job gets killed. Is the file too big?
I ran the same command while watching top, and after about a minute my computer froze with the processor at almost 100%. My guess is that the OS killed the job because it was too much for the machine to handle. Try an easier command (smaller hyperparameters) or use a more powerful computer.
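As a rough sketch of what "an easier command" might look like, here is the same invocation with illustrative smaller values for the flags from the original command (the specific numbers are assumptions, not tested recommendations; a smaller rnn_size, seq_length, and batch_size all reduce memory and compute per step):

python train.py --data_dir=./data --rnn_size 512 --num_layers 2 --seq_length 128 --batch_size 64 --output_keep_prob 0.25

If that runs, you can scale the values back up one at a time to find what your machine can actually handle.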