
[Bug]: The graphics card memory is full

Open Lvjinhong opened this issue 1 year ago • 4 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

RuntimeError: CUDA out of memory. Tried to allocate 10.00 MiB (GPU 0; 6.00 GiB total capacity; 5.26 GiB already allocated; 0 bytes free; 5.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
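The error message itself points at `PYTORCH_CUDA_ALLOC_CONF`. A minimal sketch of trying that suggestion (assuming a Linux/macOS shell; on Windows use `set` instead of `export`, and note the 128 MiB value is just an illustrative starting point):

```shell
# Cap the size at which PyTorch's caching allocator splits blocks;
# this can reduce fragmentation when reserved memory >> allocated memory.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# then start the UI as usual:
# python launch.py
```

This only helps with fragmentation; if memory is genuinely exhausted (allocated ≈ reserved ≈ total), the flags discussed below are the real fix.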

Steps to reproduce the problem

  1. I just ran `python launch.py`
  2. GPU memory fills up almost completely (6GB)
  3. The image then cannot be generated

What should have happened?

The image should generate normally

Commit where the problem happens

737eb28

What platforms do you use to access UI ?

No response

What browsers do you use to access the UI ?

No response

Command Line Arguments

No response

Additional information, context and logs

No response

Lvjinhong avatar Oct 29 '22 02:10 Lvjinhong

hmm, what GPU do you use? Normally most people add --medvram or --xformers (for some GPUs) to allow it to run on 6 or even 4GB of VRAM. If --medvram doesn't work, drop down to --lowvram instead.

ClashSAN avatar Oct 29 '22 02:10 ClashSAN

> hmm, what GPU do you use? Normally most people add --medvram or --xformers (for some GPUs) to allow it to run on 6 or even 4GB of VRAM. If --medvram doesn't work, drop down to --lowvram instead.

Thanks, I will give it a try. My GPU is an RTX 2060 (Laptop).

Lvjinhong avatar Oct 29 '22 02:10 Lvjinhong

you should use both --xformers and --medvram in your command-line args; xformers will speed up your generations. You'll have a good time!
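For reference, stable-diffusion-webui reads these flags from the `COMMANDLINE_ARGS` variable in `webui-user.sh` (or `webui-user.bat` on Windows, where the syntax is `set COMMANDLINE_ARGS=...`). A sketch of the combination suggested above:

```shell
# In webui-user.sh: pass both flags to the launcher.
# --medvram  trades speed for lower VRAM use (use --lowvram if it still OOMs)
# --xformers enables memory-efficient attention on supported GPUs
export COMMANDLINE_ARGS="--medvram --xformers"
```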

ClashSAN avatar Oct 29 '22 02:10 ClashSAN

With --medvram the output looks like this (see attached image); I think I'll try the previous version

Lvjinhong avatar Oct 29 '22 03:10 Lvjinhong

I have a GTX 1070 (8GB) and yet still see this error!

RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 8.00 GiB total capacity; 6.33 GiB already allocated; 0 bytes free; 7.10 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
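A quick way to read the numbers in these errors (`reserved_gap_gib` is a hypothetical helper for illustration, not part of the webui): when reserved memory sits far above allocated memory, fragmentation is plausible and `max_split_size_mb` may help; when the two are nearly equal, the model simply does not fit and `--medvram`/`--lowvram` is the fix.

```python
def reserved_gap_gib(reserved_gib: float, allocated_gib: float) -> float:
    """Reserved-but-unallocated CUDA memory in GiB, a rough fragmentation signal."""
    return round(reserved_gib - allocated_gib, 2)

# Figures from the GTX 1070 error above: 7.10 GiB reserved, 6.33 GiB allocated.
print(reserved_gap_gib(7.10, 6.33))  # 0.77 -> sizeable gap; fragmentation plausible

# Figures from the RTX 2060 error earlier: 5.30 GiB reserved, 5.26 GiB allocated.
print(reserved_gap_gib(5.30, 5.26))  # 0.04 -> nearly all allocated; model just doesn't fit
```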

nour-s avatar Oct 30 '22 07:10 nour-s

@nour-s what parameters do you use? Are you using --lowvram or --medvram?

ClashSAN avatar Oct 30 '22 07:10 ClashSAN

@Lvjinhong that looks like a graphics-card-related issue. See the wiki page on custom parameters, or ask in Discussions what causes the random noise output and how to fix it. Raise another issue if you like.

ClashSAN avatar Oct 30 '22 08:10 ClashSAN