SqueezeSegV3

GPU memory

Open · Xiangxu-0103 opened this issue 4 years ago · 5 comments

Hi, when I run the code with the SSGV321 model, it always fails with CUDA OUT OF MEMORY unless I set the batch_size to 1. My GPU is a TITAN Xp with 12 GB of memory. I would like to know how much GPU memory you use.

Xiangxu-0103 avatar May 16 '20 05:05 Xiangxu-0103
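
For reference, one way to measure how much GPU memory a training step actually uses is PyTorch's built-in allocator counters; a minimal sketch, where the model and batch are placeholders rather than this repo's code:

```python
import torch

# Hypothetical stand-ins for the repo's own model and range-image batch.
model = torch.nn.Conv2d(5, 32, kernel_size=3, padding=1).cuda()
batch = torch.randn(2, 5, 64, 2048, device="cuda")

torch.cuda.reset_peak_memory_stats()
out = model(batch)
out.sum().backward()

# Peak memory held by tensors during this step, in GB.
print(torch.cuda.max_memory_allocated() / 1024**3)
```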

Hi, we train our model on eight Titan RTX (24 GB) GPUs. For SSGV321, the batch size is 2 per GPU and it uses about 16 GB on each one. For SSGV353, the batch size is 1 per GPU and it also uses about 16 GB. The reason it needs so much GPU memory is that the models contain many tensor unfolding operations, whose PyTorch implementation is extremely memory-consuming during training. We are working on reducing the memory usage and improving the speed now. Thanks for your interest.

chenfengxu714 avatar May 16 '20 06:05 chenfengxu714
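
For context on the unfolding cost mentioned above: `torch.nn.functional.unfold` materializes every k×k neighborhood as its own column, so the intermediate activation grows by roughly a factor of k², and a copy is kept for the backward pass during training. A minimal sketch with illustrative (not repo-specific) shapes:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 64, 64, 2048)                   # e.g. a range-image feature map
patches = F.unfold(x, kernel_size=3, padding=1)    # shape (1, 64*9, 64*2048)

# The unfolded tensor holds ~9x the elements of the input.
print(x.numel(), patches.numel())                  # 8388608 vs 75497472
```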

Hi, I have the same problem: "CUDA OUT OF MEMORY" when batch_size > 1, and "RuntimeError: cuda runtime error" when batch_size = 1. (screenshot of the error attached)

I am using an NVIDIA 2080 Ti with 11 GB of memory.

lyhdet avatar Jan 25 '21 08:01 lyhdet

Setting "cudnn.benchmark = False" solved the "CUDA runtime error".

lyhdet avatar Jan 25 '21 08:01 lyhdet
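
For anyone hitting the same runtime error, the flag lyhdet mentions is a one-line change near the start of the training script; a minimal sketch, assuming standard PyTorch:

```python
import torch

# Disable cuDNN's autotuner: its per-shape algorithm search allocates extra
# workspace memory, which can fail on GPUs that are already close to full.
torch.backends.cudnn.benchmark = False
```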

Hello, will there be a lighter version that requires less GPU memory? Really looking forward to it.

Solacex avatar Apr 27 '21 11:04 Solacex

RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1

Stone-sy avatar Aug 26 '21 13:08 Stone-sy
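
This particular error usually means the model was wrapped in nn.DataParallel while its parameters were still on a device other than device_ids[0]; a minimal sketch of the usual fix, assuming standard PyTorch rather than this repo's code:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(5, 32, kernel_size=3, padding=1)  # placeholder network

# DataParallel requires the parameters to live on device_ids[0] before wrapping.
model = model.to("cuda:0")
model = nn.DataParallel(model, device_ids=[0, 1])
```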