Pytorch-Deeplab

CUDA error: out of memory

Open siinem opened this issue 6 years ago • 2 comments

Hi, without changing anything in the code, I ran train.py on the augmented PASCAL VOC dataset, but I'm getting a CUDA error. Do you have a suggestion for how I can solve this?

The detailed error message is below.

Thanks in advance, sinem.

 File "train.py", line 234, in <module>
   main()
 File "train.py", line 213, in main
   pred = interp(model(images))
 File "/home/sinem/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
   result = self.forward(*input, **kwargs)
 File "/media/sinem/LENOVO/Pytorch-Deeplab-master/deeplab/model.py", line 261, in forward
   x = self.layer5(x)
 File "/home/sinem/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
   result = self.forward(*input, **kwargs)
 File "/media/sinem/LENOVO/Pytorch-Deeplab-master/deeplab/model.py", line 115, in forward
   out += self.conv2d_list[i+1](x)
 File "/home/sinem/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
   result = self.forward(*input, **kwargs)
 File "/home/sinem/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 301, in forward
   self.padding, self.dilation, self.groups)
RuntimeError: CUDA error: out of memory

siinem avatar Nov 16 '18 16:11 siinem

Maybe you can reduce the batch size to fit your GPU memory.

speedinghzl avatar Nov 16 '18 16:11 speedinghzl
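A minimal sketch of that suggestion: the batch size handed to the DataLoader is the most direct lever on peak GPU memory, since activation memory in the forward/backward pass scales roughly linearly with it. The dataset below is a dummy stand-in for the augmented PASCAL VOC loader, and the 321x321 crop size is only assumed from the repo's defaults; both are placeholders, not the repo's actual code.

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Dummy stand-in for the augmented PASCAL VOC dataset: random RGB crops
    # with integer class masks (21 VOC classes). Replace with the real loader.
    images = torch.randn(16, 3, 321, 321)
    labels = torch.randint(0, 21, (16, 321, 321))
    dataset = TensorDataset(images, labels)

    # Halving batch_size roughly halves the activation memory used by the
    # forward/backward pass, which is what dominates GPU usage here.
    loader = DataLoader(dataset, batch_size=2, shuffle=True)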

> Maybe you can reduce the batch size to fit your GPU memory.

I was only able to run it when the batch size was 1 :( My GPU is a GeForce GTX 1050 Ti with 4 GB.

Then, for such small batch sizes, how can I freeze the BN (batch normalization) statistics? Is there a console command for that?

siinem avatar Nov 16 '18 17:11 siinem
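On the BN question: the thread shows no dedicated console flag for this, but freezing batch-norm statistics in PyTorch only takes a few lines. A minimal sketch, assuming the model uses standard nn.BatchNorm2d layers; freeze_bn is a hypothetical helper, not a function from this repo:

    import torch.nn as nn

    def freeze_bn(model: nn.Module) -> None:
        # Put every BatchNorm2d in eval mode so running_mean/running_var stop
        # being updated from tiny batches, and freeze its affine parameters.
        for m in model.modules():
            if isinstance(m, nn.BatchNorm2d):
                m.eval()
                for p in m.parameters():  # gamma and beta, if affine=True
                    p.requires_grad = False

    # Call this after model.train() but before the training loop, because
    # model.train() would otherwise switch the BN layers back to train mode:
    #   model.train()
    #   freeze_bn(model)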