practical-pytorch
volatile=True during generation
I noticed that I was getting out-of-memory errors when I tried to generate long sequences on the GPU. I posted about this on the forum (https://discuss.pytorch.org/t/optimizing-cuda-memory-pipeline-for-rnn/3311/5) and learned that if you create the input Variables with volatile=True during generation, autograd doesn't retain the computation graph, so you can generate arbitrarily long sequences.
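For reference, here is a minimal sketch of what that might look like with the pre-0.4 Variable API (where volatile exists; newer PyTorch uses torch.no_grad() instead). The model and helpers below (CharRNN, char_tensor, generate) are illustrative stand-ins, not the tutorial's actual code:

```python
import string

import torch
import torch.nn as nn
from torch.autograd import Variable

# Character vocabulary for the toy model below.
all_characters = string.printable
n_characters = len(all_characters)


def char_tensor(s):
    """Turn a string into a LongTensor of character indices."""
    return torch.LongTensor([all_characters.index(c) for c in s])


class CharRNN(nn.Module):
    """Tiny stand-in for the tutorial's decoder: embedding -> GRU -> linear."""

    def __init__(self, hidden_size=100):
        super(CharRNN, self).__init__()
        self.hidden_size = hidden_size
        self.embed = nn.Embedding(n_characters, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, n_characters)

    def forward(self, inp, hidden):
        emb = self.embed(inp.view(1, -1))        # (1, 1, hidden_size)
        output, hidden = self.gru(emb, hidden)   # (1, 1, hidden_size)
        return self.out(output.view(1, -1)), hidden

    def init_hidden(self):
        # volatile=True marks the whole generation graph as inference-only
        return Variable(torch.zeros(1, 1, self.hidden_size), volatile=True)


def generate(decoder, prime_str='A', predict_len=1000, temperature=0.8):
    hidden = decoder.init_hidden()
    inp = Variable(char_tensor(prime_str[-1]), volatile=True)
    predicted = prime_str

    for _ in range(predict_len):
        output, hidden = decoder(inp, hidden)

        # Sample the next character from the temperature-scaled distribution
        dist = output.data.view(-1).div(temperature).exp()
        top_i = torch.multinomial(dist, 1)[0]
        predicted_char = all_characters[top_i]
        predicted += predicted_char

        # Wrap the next input as a volatile Variable too, so each step's
        # graph is freed immediately and memory use stays flat
        inp = Variable(char_tensor(predicted_char), volatile=True)

    return predicted


print(generate(CharRNN(), prime_str='Th', predict_len=200))
```

Because volatility propagates through every operation that touches a volatile input, wrapping just the inputs and the initial hidden state is enough to keep the whole sampling loop graph-free.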
Good idea, thanks! I'll add this in the next round of updates.