neural-style
Out of memory issues
Hi, I'm trying to create large images (1024px), but I'm running into out-of-memory errors quite often. I'd like to know if anyone can help me check whether I've set everything up correctly. Is this setup enough, or do I really need more memory?
I am running a preemptible VM on Google Compute Engine with the following specs:
n1-standard-4 (4 vCPUs, 15 GB memory)
1 x NVIDIA Tesla P100
Here is a sample combination that gives me out-of-memory errors:
style weight: 200
style scale: 1.5
image size: 1024
The rest is all defaults; see the command sketch below.
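For reference, the invocation looks roughly like this (a sketch; the image paths are placeholders and every flag not shown is left at its default):

    th neural_style.lua \
      -content_image content.jpg \
      -style_image style.jpg \
      -style_weight 200 \
      -style_scale 1.5 \
      -image_size 1024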
The error I get is the following:
"THCudaCheck FAIL file=/tmp/luarocks_cutorch-scm-1-6295/cutorch/lib/THC/generic/THCStorage.cu line=66 error=2 : out of memory"
73:"/usr/local/bin/torch/install/bin/luajit: /usr/local/bin/torch/install/share/lua/5.1/optim/lbfgs.lua:84: cuda runtime error (2) : out of memory at /tmp/luarocks_cutorch-scm-1-6295/cutorch/lib/THC/generic/THCStorage.cu:66"
74:"stack traceback:"
75:" [C]: in function 'new'"
76:" /usr/local/bin/torch/install/share/lua/5.1/optim/lbfgs.lua:84: in function 'lbfgs'"
77:" /usr/local/bin/neural-style/neural_style.lua:303: in function 'main'"
78:" /usr/local/bin/neural-style/neural_style.lua:601: in main chunk"
79:" [C]: in function 'dofile'"
80:" .../bin/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk"
81:" [C]: at 0x00405d50"
Source image (attached)
Style image (attached)
Are you using -backend cudnn? You'll often get more memory-efficient results with that.
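For example, the same command as above with just the backend flag added (a sketch; image paths are placeholders):

    th neural_style.lua \
      -content_image content.jpg \
      -style_image style.jpg \
      -style_weight 200 \
      -style_scale 1.5 \
      -image_size 1024 \
      -backend cudnn

cuDNN's convolution kernels are generally less memory-hungry than the default nn backend. Just avoid adding -cudnn_autotune while you're fighting OOM errors, since the autotuner can use significantly more memory.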