SAN_pytorch
CUDA out of memory
Why CUDA out of memory? Tried to allocate 8.38 GiB (GPU 0; 10.92 GiB total capacity; 8.69 GiB already allocated; 1.22 GiB free; 33.00 MiB cached). I set batch_size=1, but I still get CUDA out of memory. What is the reason?
What are you trying to do exactly? Your 11 GB GPU should be fine for inference (using test.py), even with batch_size=16. As for training, I used batch_size=16 with a patch size of 48x48 on a T4.
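If the OOM happens at inference on large input images, one common workaround is tiled processing: run the model on fixed-size tiles so peak memory is bounded by the tile size rather than the full image. This is a generic, minimal sketch (not SAN's actual code); `upscale` is a hypothetical stand-in for the model's forward pass, using NumPy so the sketch is runnable.

```python
import numpy as np

def upscale(patch, scale=2):
    # Hypothetical stand-in for the SR model's forward pass:
    # nearest-neighbour upsampling, just to keep the sketch runnable.
    return patch.repeat(scale, axis=0).repeat(scale, axis=1)

def tiled_upscale(img, tile=48, scale=2):
    """Process the image tile by tile so peak memory stays bounded
    by the tile size instead of the full image size."""
    h, w = img.shape[:2]
    out = np.zeros((h * scale, w * scale), dtype=img.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = img[y:y + tile, x:x + tile]
            ph, pw = patch.shape[:2]
            out[y * scale:(y + ph) * scale,
                x * scale:(x + pw) * scale] = upscale(patch, scale)
    return out

img = np.arange(100 * 100, dtype=np.float32).reshape(100, 100)
# For this purely local upscaler, tiled output matches whole-image output.
assert np.array_equal(tiled_upscale(img), upscale(img))
```

Note that for a real CNN the tiles should overlap by at least the receptive field and the overlap cropped away when stitching, otherwise visible seams appear at tile borders. In PyTorch, also wrap the forward pass in `torch.no_grad()` at test time so activations are not kept for backprop.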