
Can't load any model (CUDA out of memory) COLAB PRO

Open Clicli99 opened this issue 1 year ago • 5 comments

Hey TheLastBen, thanks for the new SD. I don't know what's happening, but this week I've been flooded with issues. The latest one is that I can't load any model. I've set the runtime to High-RAM, and it throws:

```
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 14.75 GiB total capacity; 12.60 GiB already allocated; 832.00 KiB free; 13.42 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
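For what it's worth, the error message's own suggestion about `max_split_size_mb` can be tried in a Colab cell before PyTorch first touches the GPU. A minimal sketch; the value 512 is an arbitrary example, not a recommendation from this thread:

```python
import os

# PYTORCH_CUDA_ALLOC_CONF must be set before the first CUDA
# allocation in the process, or the allocator ignores it.
# 512 MiB is an arbitrary example value; tune it for your workload.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"
```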

Clicli99 avatar Jul 27 '23 07:07 Clicli99

Update: I used this code to clear the cache:

```python
import torch
torch.cuda.empty_cache()
```

and somehow it’s working
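A note on why this sometimes only half-works: `torch.cuda.empty_cache()` releases the allocator's cached blocks, but it cannot free tensors that Python still references. A hedged sketch (`free_cuda_memory` is a made-up helper name, not part of this notebook) that drops dead references first:

```python
import gc

def free_cuda_memory():
    # Drop unreferenced Python objects first, so any CUDA tensors
    # they held become eligible for release.
    gc.collect()
    try:
        import torch
        if torch.cuda.is_available():
            # Only frees *cached* blocks; live tensors stay allocated.
            torch.cuda.empty_cache()
    except ImportError:
        pass  # torch not installed; nothing to free

free_cuda_memory()
```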

Clicli99 avatar Jul 27 '23 08:07 Clicli99

I tried running the above `torch.cuda.empty_cache()` after loading everything else, in the cell above "Start Stable Diffusion". Unfortunately, after a variety of attempts, I have not managed to load the SDXL model without running out of VRAM. I am running a clean install on a High-RAM Colab instance:

```
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 14.00 MiB (GPU 0; 14.75 GiB total capacity; 12.77 GiB already allocated; 12.81 MiB free; 13.41 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

jtwilkgithub avatar Jul 27 '23 14:07 jtwilkgithub

> i tried to run the above torch.cuda.empty_cache() after loading everything else, in the cell above the start stable diffusion, unfortunately, I have not successfully loaded the sdxl model after a variety of attempts without running out of VRAM. I am running a clean install on High RAM instance of Colab:
>
> File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1143, in convert return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking) torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 14.00 MiB (GPU 0; 14.75 GiB total capacity; 12.77 GiB already allocated; 12.81 MiB free; 13.41 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I'm still running 1.5. I'll give the new SD some time to settle before using it.

Clicli99 avatar Jul 27 '23 16:07 Clicli99

> i tried to run the above torch.cuda.empty_cache() after loading everything else, in the cell above the start stable diffusion, unfortunately, I have not successfully loaded the sdxl model after a variety of attempts without running out of VRAM. I am running a clean install on High RAM instance of Colab:
>
> File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1143, in convert return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking) torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 14.00 MiB (GPU 0; 14.75 GiB total capacity; 12.77 GiB already allocated; 12.81 MiB free; 13.41 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Use this before the last cell:

```python
import torch

# Allocate ~300 MB on the GPU, then free it and flush the cache.
a = torch.zeros(300000000, dtype=torch.int8)
a = a.cuda()
del a
torch.cuda.empty_cache()
```

spmadv avatar Aug 08 '23 08:08 spmadv

Also check your anchor_scales in the RPN. Having many increases the complexity of your model, so try decreasing the number of anchor_scales: say you have [2, 4, 8, 16, 32], try making it [8], or [2, 4], or [2, 4, 8], etc.
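As a hypothetical illustration of why fewer scales shrink the model (this comment is about an RPN, e.g. in Faster R-CNN, not Stable Diffusion itself): an RPN typically generates `len(anchor_scales) * len(aspect_ratios)` anchors per feature-map location, so trimming the scale list cuts the per-location outputs proportionally. A minimal sketch; the ratio list [0.5, 1.0, 2.0] is an assumed example:

```python
# Hypothetical sketch: anchors per feature-map location in an RPN
# is len(anchor_scales) * len(aspect_ratios).
ASPECT_RATIOS = [0.5, 1.0, 2.0]  # assumed example ratios

def anchors_per_location(anchor_scales, aspect_ratios=ASPECT_RATIOS):
    return len(anchor_scales) * len(aspect_ratios)

print(anchors_per_location([2, 4, 8, 16, 32]))  # 15 anchors per location
print(anchors_per_location([8]))                # 3 anchors per location
```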

TonojiKiobya avatar Mar 27 '24 19:03 TonojiKiobya