
RuntimeError: CUDA out of memory.

Open SaiVinay007 opened this issue 6 years ago • 8 comments

I am trying to evaluate on the KITTI 2015 test dataset and I am getting this error:

 python3 submission.py --maxdisp 192 \
         --model stackhourglass \
         --KITTI 2015 \
         --datapath './dataset/testing/' \
         --loadmodel 'pretrained_model_KITTI2015.tar'

Number of model parameters: 5224768
torch.Size([1, 3, 384, 1248]) torch.Size([1, 3, 384, 1248])
/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py:2457: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py:2539: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  "See the documentation of nn.Upsample for details.".format(mode))
/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py:2539: UserWarning: Default upsampling behavior when mode=trilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  "See the documentation of nn.Upsample for details.".format(mode))

Traceback (most recent call last):
  File "submission.py", line 116, in <module>
    main()
  File "submission.py", line 107, in main
    pred_disp = test(imgL,imgR)
  File "submission.py", line 81, in test
    output = model(imgL,imgR)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/saivinay/Documents/Stereo_Matching/code/PSMNet/models/stackhourglass.py", line 156, in forward
    pred3 = disparityregression(self.maxdisp)(pred3)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/saivinay/Documents/Stereo_Matching/code/PSMNet/models/submodule.py", line 63, in forward
    out = torch.sum(x*disp,1)
RuntimeError: CUDA out of memory. Tried to allocate 352.00 MiB (GPU 0; 3.95 GiB total capacity; 2.39 GiB already allocated; 97.88 MiB free; 593.61 MiB cached)

nvidia-smi gives :

Wed Jul 17 14:11:20 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.78       Driver Version: 410.78       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 105...  Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   47C    P0    N/A /  N/A |    412MiB /  4040MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1533      G   /usr/lib/xorg/Xorg                           263MiB |
|    0      9285      G   compiz                                        98MiB |
|    0     28432      G   ...uest-channel-token=10673617962059914987    47MiB |
+-----------------------------------------------------------------------------+

Can someone please help me in solving this problem?

SaiVinay007 avatar Jul 17 '19 08:07 SaiVinay007

@SaiVinay007 Your GPU does not have enough memory to infer a pair of 384x1248 images. You need around 4500 MB of GPU memory.
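As a rough back-of-the-envelope check (a sketch, assuming float32 and the [1, 192, 384, 1248] disparity-volume shape implied by `--maxdisp 192` and the padded input size), a single full-resolution volume alone is about as large as the allocation that failed in the traceback:

```python
# Rough size of one full-resolution disparity volume in PSMNet.
# Shape assumed from the run above: [batch=1, maxdisp=192, H=384, W=1248], float32.
def tensor_mib(*shape, bytes_per_elem=4):
    n = 1
    for d in shape:
        n *= d
    return n * bytes_per_elem / 2**20  # bytes -> MiB

vol = tensor_mib(1, 192, 384, 1248)
print(f"one disparity volume: {vol:.1f} MiB")  # ~351 MiB, close to the failed 352 MiB allocation
```

The stacked-hourglass model keeps several such volumes plus intermediate activations alive at once, which is why the total climbs toward 4.5 GB.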

JiaRenChang avatar Jul 18 '19 02:07 JiaRenChang

Thanks for the reply, @JiaRenChang. Is there any workaround to run the model on my system? And could you please explain how to estimate the GPU memory required to infer a pair of images, or in the general case?

SaiVinay007 avatar Jul 18 '19 02:07 SaiVinay007

@SaiVinay007 You can downsample or crop the images. For estimating GPU memory, you can refer to https://discuss.pytorch.org/t/gpu-memory-estimation-given-a-network/1713/6 .
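For illustration, a minimal pure-Python sketch of nearest-neighbour downsampling by 2 (in practice you would use `cv2.resize` or PIL's `Image.resize`; PSMNet's feature extractor and 3D hourglass downsample internally, so it is usually safest to keep both dimensions divisible by 16):

```python
# Minimal sketch: halve both dimensions of an image stored as nested lists
# (rows x cols) by nearest-neighbour striding. Real pipelines would use
# cv2.resize or PIL.Image.resize instead.
def downsample2(img):
    return [row[::2] for row in img[::2]]

img = [[r * 10 + c for c in range(8)] for r in range(6)]  # toy 6x8 "image"
small = downsample2(img)
print(len(small), len(small[0]))  # 3 4
```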

JiaRenChang avatar Jul 18 '19 03:07 JiaRenChang

@JiaRenChang Thanks for the help.

I went through https://discuss.pytorch.org/t/gpu-memory-estimation-given-a-network/1713/6 , but I didn't understand what to specify as `input_size`, since here we have two images as input. In submission.py:

model = nn.DataParallel(model, device_ids=[0])
se = SizeEstimator(model, input_size=())
print(se.estimate_size())

If I give two tuples:

se = SizeEstimator(model, input_size=((1,3,384,1248),(1,3,384,1248)))
Traceback (most recent call last):
  File "submission.py", line 72, in <module>
    print(se.estimate_size())
  File "/home/saivinay/Documents/Stereo_Matching/code/PSMNet/pytorch_modelsize.py", line 75, in estimate_size
    self.get_output_sizes()
  File "/home/saivinay/Documents/Stereo_Matching/code/PSMNet/pytorch_modelsize.py", line 34, in get_output_sizes
    input_ = Variable(torch.FloatTensor(*self.input_size), volatile=True)
TypeError: new() received an invalid combination of arguments - got (tuple, tuple), but expected one of:
 * (torch.device device)
 * (torch.Storage storage)
 * (Tensor other)
 * (tuple of ints size, torch.device device)
      didn't match because some of the arguments have invalid types: (tuple, tuple)
 * (object data, torch.device device)
      didn't match because some of the arguments have invalid types: (tuple, tuple)

Or if I give two lists:

se = SizeEstimator(model, input_size=([1,3,384,1248],[1,3,384,1248]))
TypeError: new() received an invalid combination of arguments - got (list, list), but expected one of:
 * (torch.device device)
 * (torch.Storage storage)
 * (Tensor other)
 * (tuple of ints size, torch.device device)
      didn't match because some of the arguments have invalid types: (list, list)
 * (object data, torch.device device)
      didn't match because some of the arguments have invalid types: (list, list)

Can you please help regarding this problem?
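(A note on the error above: the traceback shows SizeEstimator builds its test input as `torch.FloatTensor(*self.input_size)`, so it only accepts a single shape tuple, not one per input. A quick manual alternative is to sum tensor sizes by hand. For instance, the parameter memory follows directly from the count submission.py prints, and shows that activations, not parameters, are what exhaust the GPU:)

```python
# Manual estimate from the count printed by submission.py
# ("Number of model parameters: 5224768"): parameters are tiny in float32,
# so the ~4.5 GB requirement comes almost entirely from activations.
n_params = 5224768
param_mib = n_params * 4 / 2**20  # 4 bytes per float32 parameter
print(f"parameter memory: {param_mib:.1f} MiB")  # ~19.9 MiB
```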

SaiVinay007 avatar Jul 18 '19 06:07 SaiVinay007

@SaiVinay007 Did you manage to run the submission.py script on KITTI2015 testing images?

tv12345 avatar Sep 05 '19 13:09 tv12345

@tv12345 Yeah, I was able to run the script by reducing the size of the images, but I was not getting good results.

SaiVinay007 avatar Sep 05 '19 17:09 SaiVinay007

@SaiVinay007 what resolution did you try?

tv12345 avatar Sep 17 '19 13:09 tv12345

@tv12345 I divided both dimensions by 2.
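(One likely reason for the degraded results, added as a hedged note rather than something stated in the thread: disparity is measured in pixels, so halving the image width also halves every true disparity. To compare against full-resolution ground truth, the predicted map must be upsampled and its values multiplied by the same factor. A minimal sketch for a single row:)

```python
# If images are downsampled by a factor s, predicted disparities shrink by s too.
# Sketch: nearest-neighbour upsample one disparity row and rescale its values.
def restore_disparity(row, s=2):
    out = []
    for d in row:
        out.extend([d * s] * s)  # repeat each pixel s times, scale disparity by s
    return out

print(restore_disparity([10.0, 12.5]))  # [20.0, 20.0, 25.0, 25.0]
```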

SaiVinay007 avatar Sep 22 '19 12:09 SaiVinay007