practical-pytorch

volatile=True during generation

Open kylemcdonald opened this issue 8 years ago • 1 comment

I noticed that I was getting out-of-memory errors when I tried to generate long sequences on the GPU. I posted about this on the forum https://discuss.pytorch.org/t/optimizing-cuda-memory-pipeline-for-rnn/3311/5 and learned that if you create variables with volatile=True during generation, you can generate arbitrarily long sequences without running out of memory.
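For reference, a minimal sketch of the pattern being suggested (the `decoder`, `init_hidden`, and sampling details are hypothetical stand-ins, not the tutorial's exact code). Marking inputs as volatile tells autograd not to build a graph during generation, so memory use stays flat no matter how long the sequence gets. Note that `volatile` only exists in PyTorch versions before 0.4; newer versions replaced it with the `torch.no_grad()` context manager.

```python
import torch
from torch.autograd import Variable

def generate(decoder, prime_input, predict_len=1000):
    # `decoder` is assumed to return log-probabilities over the vocabulary
    # and a hidden state, and to expose an `init_hidden` helper.
    hidden = decoder.init_hidden(1)

    # volatile=True disables graph construction for everything downstream,
    # so long generation loops don't accumulate autograd state on the GPU.
    # (In PyTorch >= 0.4, wrap the loop in `with torch.no_grad():` instead.)
    inp = Variable(prime_input, volatile=True)

    predicted = []
    for _ in range(predict_len):
        output, hidden = decoder(inp, hidden)

        # Sample the next token from the output distribution.
        probs = output.data.view(-1).exp()
        top_i = torch.multinomial(probs, 1)[0]
        predicted.append(top_i)

        # Feed the sampled token back in, again as a volatile Variable.
        inp = Variable(torch.LongTensor([[top_i]]), volatile=True)

    return predicted
```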

kylemcdonald · May 24 '17

Good idea, thanks. I'll add this in the next round of updates.

spro · Jun 02 '17