skip-thoughts
pygpu.gpuarray.GpuArrayException: out of memory on small corpus
Dear experts,
I have encountered a memory issue while attempting to train the skip-thoughts model on a corpus of 1 million sentences. The model in the paper was trained on far more data than that: Table 1 of the paper (https://arxiv.org/pdf/1506.06726.pdf) indicates a training set of about 74 million sentences.
If this really is a GPU memory limitation, how was the model in the paper trained, and on what hardware?
I am currently working on an AWS instance with a single Tesla K80 GPU with 12 GB of memory. The memory error is displayed below.
```
Traceback (most recent call last):
  File "training_notes.py", line 25, in
```

Thank You,
Kuhan
I am also having this same issue with the NC and NV VMs in Azure. I was able to run this with ~20k lines, but anything more resulted in GPU out-of-memory errors.
If I turn off Theano's GPU backend, it appears to load and run more than 100k lines fine. Training is much slower with the GPU off, but at least it's an option; a sketch of how to do this follows.
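A minimal sketch of that CPU fallback, assuming a stock Theano install: `THEANO_FLAGS` must be set before `theano` is first imported in the process. The commented-out alternative keeps the GPU but makes the gpuarray backend (the one raising `pygpu.gpuarray.GpuArrayException`) preallocate a fixed fraction of memory up front, which can sidestep fragmentation-related OOMs.

```python
import os

# Must be set before theano is imported anywhere in this process.
# device=cpu disables the GPU entirely (slow, but memory-safe).
os.environ['THEANO_FLAGS'] = 'device=cpu,floatX=float32'

# Alternative: stay on the GPU but reserve 90% of its memory up front,
# which can avoid fragmentation-related out-of-memory failures:
# os.environ['THEANO_FLAGS'] = 'device=cuda0,floatX=float32,gpuarray.preallocate=0.9'

import theano
print(theano.config.device)  # should print 'cpu'
```

Equivalently, from the shell: `THEANO_FLAGS=device=cpu python training_notes.py`.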