stable-dreamfusion
CUDA out of memory: how to run it with less GPU memory?
Hi friends, I put a lot of effort into getting this amazing code running, but it only works for a few minutes before failing with "CUDA out of memory". My card is an RTX 3060 (12 GiB per the error log below), and it works great with stable diffusion. Is there any way to run this code with less GPU memory to avoid the problem?
FULL ERROR
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 12.00 GiB total capacity; 10.69 GiB already allocated; 0 bytes free; 10.83 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
loss=0.0019 (0.0015), lr=0.007916: : 15% 15/100 [00:11<01:05, 1.30it/s]
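The error message itself suggests one knob worth trying: setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF` to reduce allocator fragmentation. A minimal sketch of how to do that (the 128 MiB value is my guess and needs tuning; the variable has to be set before torch does any CUDA work, e.g. at the top of main.py or in the shell environment):

```python
# Sketch: configure the CUDA caching allocator before torch touches the GPU.
# The 128 MiB split size is an assumption; try different values.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # the import (and any CUDA allocation) must come after the env var is set
```

This only mitigates fragmentation (the "reserved >> allocated" case); if the model genuinely needs more memory than the card has, it won't help.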
I see this in main.py:
parser.add_argument('--max_ray_batch', type=int, default=1024, help="batch size of rays at inference to avoid OOM (only valid when not using --cuda_ray)")
This arg can indeed reduce the risk of OOM (note that, per the help text, it only applies when not using --cuda_ray). A sketch of the idea is below.
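For context, here is a hedged sketch of what a flag like --max_ray_batch does (the real implementation lives in the repo's NeRF renderer; `run_model` and `rays` are stand-ins): rays are evaluated in chunks, so peak activation memory scales with the chunk size rather than the full image. Lowering it (e.g. from 1024 to 512 or 256) trades inference speed for memory.

```python
import torch

def render_in_chunks(run_model, rays, max_ray_batch=1024):
    # Evaluate rays in fixed-size chunks; only one chunk's activations
    # are alive at a time, which bounds peak GPU memory.
    outputs = []
    for i in range(0, rays.shape[0], max_ray_batch):
        outputs.append(run_model(rays[i:i + max_ray_batch]))
    return torch.cat(outputs, dim=0)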
I have this problem too, but not on a local computer. I get this error on Google Colab:
CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 14.76 GiB total capacity; 12.96 GiB already allocated; 59.75 MiB free; 13.42 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I have a similar problem.
RuntimeError: CUDA out of memory. Tried to allocate 162.00 MiB (GPU 0; 23.65 GiB total capacity; 20.20 GiB already allocated; 94.88 MiB free; 20.47 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF