efficient_densenet_pytorch

What is the minimum GPU memory required? Still breaks for me in a single GPU

Open PabloRR100 opened this issue 6 years ago • 1 comment

Amazon p3.2xlarge: 1 GPU - Tesla V100 -- GPU Memory: 16GB -- Batch Size = 64

If efficient = False:
RuntimeError: CUDA out of memory. Tried to allocate 1024.00 KiB (GPU 0; 15.75 GiB total capacity; 14.71 GiB already allocated; 4.88 MiB free; 4.02 MiB cached)

If efficient = True:
RuntimeError: CUDA out of memory. Tried to allocate 61.25 MiB (GPU 0; 15.75 GiB total capacity; 14.65 GiB already allocated; 50.88 MiB free; 5.33 MiB cached)


Amazon g3.4xlarge: 1 GPU - Tesla M60 -- GPU Memory: 8GB -- Batch Size = 64

If efficient = False:
RuntimeError: CUDA out of memory. Tried to allocate 184.00 MiB (GPU 0; 7.44 GiB total capacity; 6.98 GiB already allocated; 25.81 MiB free; 5.57 MiB cached)

If efficient = True:
RuntimeError: CUDA out of memory. Tried to allocate 184.00 MiB (GPU 0; 7.44 GiB total capacity; 6.98 GiB already allocated; 25.81 MiB free; 5.57 MiB cached)
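For reference, the runs above toggle the memory-efficient checkpointing roughly like this (a minimal sketch based on this repo's DenseNet class; the exact block_config, constructor defaults, and batch shape are assumptions on my part, not the demo verbatim):

```python
import torch
from models import DenseNet  # model definition shipped with this repo

# DenseNet-BC style model; growth_rate/block_config are assumed demo-like values.
model = DenseNet(
    growth_rate=12,
    block_config=(16, 16, 16),
    num_classes=10,
    efficient=True,  # False reproduces the non-checkpointed runs above
).cuda()

# Batch size 64 with CIFAR-sized inputs, matching the reported settings.
inputs = torch.randn(64, 3, 32, 32, device='cuda')
outputs = model(inputs)
loss = outputs.sum()
loss.backward()

print(torch.cuda.max_memory_allocated() / 1024 ** 2, 'MiB peak allocated')
```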

PabloRR100 avatar Feb 08 '19 19:02 PabloRR100

What version of PyTorch are you using? I can run both the efficient and non-efficient models on my 8GB GPU.

Are you just running the demo, using the default settings?
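If it helps to compare setups, something like the following prints the version and memory figures in question (a small sketch using standard PyTorch calls as of early 2019; nothing here is specific to this repo):

```python
import torch

print('PyTorch:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print('Device:', props.name, '--', props.total_memory / 1024 ** 3, 'GiB total')
    print('Allocated:', torch.cuda.memory_allocated(0) / 1024 ** 2, 'MiB')
    print('Cached:', torch.cuda.memory_cached(0) / 1024 ** 2, 'MiB')
```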

gpleiss avatar Feb 14 '19 01:02 gpleiss