
RuntimeError: CUDA out of memory. Tried to allocate xxx MiB

TheZipCreator opened this issue 3 years ago • 4 comments

Whenever I try to use "dream" from the command line, I get the message: RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 2.00 GiB total capacity; 747.98 MiB already allocated; 0 bytes free; 778.00 MiB reserved in total by PyTorch). I don't know much about PyTorch, so is this fixable, or do I just not have enough memory to use this?

TheZipCreator avatar Jun 18 '21 22:06 TheZipCreator

With default settings, Big Sleep takes up close to 8GB of dedicated video RAM. Some things you can try to lower the memory footprint:

  • Close all other applications when running it. Having something open like Chrome can use up memory on your graphics card and cause this error.
  • Set the --num-cutouts flag to 16
  • Try setting the --image-size to 256, and if that doesn't work, try 128 (there will be a long pause where it initializes the new model for the different image sizes)

If you want to check how much VRAM you have and you are using Windows 10, you can follow an online guide like this one. I think that --num-cutouts=16 --image-size=128 should lower the memory requirements to around 4GB.
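If you'd rather check from Python, a small sketch like this (assuming PyTorch is installed; `bytes_to_gib` is just a helper name I made up) lists each CUDA device and its total VRAM, which you can compare against the ~8GB default requirement:

```python
def bytes_to_gib(n: int) -> float:
    """Convert a byte count to GiB."""
    return n / 2**30

try:
    import torch

    if torch.cuda.is_available():
        # One line per visible CUDA device: index, name, total VRAM.
        for i in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(i)
            print(f"cuda:{i} {props.name}: {bytes_to_gib(props.total_memory):.1f} GiB")
    else:
        print("PyTorch sees no CUDA device")
except ImportError:
    print("PyTorch is not installed")
```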

Hope that helps

wolfgangmeyers avatar Jun 21 '21 11:06 wolfgangmeyers

Can I use shared GPU memory instead of dedicated?

areyougood avatar Jul 19 '21 19:07 areyougood

I have the same issue, but I suspect it is not using the correct GPU. I have a laptop with Optimus.

Graphics:
  Device-1: Intel CoffeeLake-H GT2 [UHD Graphics 630] driver: i915 v: kernel
  Device-2: NVIDIA GP107M [GeForce GTX 1050 Ti Mobile] driver: nvidia v: 510.47.03

because of this output:

RuntimeError: CUDA out of memory. Tried to allocate 38.00 MiB (GPU 0; 3.95 GiB total capacity; 3.18 GiB already allocated; 31.69 MiB free; 3.28 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

sgtnasty avatar Mar 17 '22 20:03 sgtnasty

I found this, but I'm unsure how to tell big_sleep to select a CUDA device. https://pytorch.org/docs/stable/notes/cuda.html

cuda = torch.device('cuda')     # Default CUDA device
cuda0 = torch.device('cuda:0')
cuda2 = torch.device('cuda:2')  # GPU 2 (these are 0-indexed)
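If the library doesn't expose a device flag, one common workaround (an assumption on my part, not a documented big-sleep option) is to restrict which GPUs PyTorch can see at all via the CUDA_VISIBLE_DEVICES environment variable:

```python
import os

# Must be set before torch (or big_sleep) is imported. The value is the
# device index as reported by nvidia-smi; that GPU then appears to
# PyTorch as cuda:0, and all other GPUs are hidden from the process.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```

The same can be done from the shell, e.g. `CUDA_VISIBLE_DEVICES=0 dream ...`.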

sgtnasty avatar Mar 17 '22 20:03 sgtnasty