
[Bug]: CUDA out of memory when generating pictures in the web UI

Open Frocean opened this issue 3 years ago • 5 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

The web UI shows a CUDA out-of-memory error every time I try to generate a picture. I don't know how to deal with it, and I'm not sure what details I should list here. I've read the Windows section of the Troubleshooting wiki, and although there are some similar problems in Issues, their solutions don't work on my PC.

Steps to reproduce the problem

  1. Open the web UI (using Microsoft Edge)
  2. Set parameters
  3. Press generate button and wait
  4. RuntimeError: CUDA out of memory.

What should have happened?

I think it should work.

Commit where the problem happens

No response

What platforms do you use to access the UI?

Windows

What browsers do you use to access the UI?

Microsoft Edge

Command Line Arguments

rem webui-user.bat
@echo off

set PYTHON=C:\Users\Frocean\AppData\Local\Programs\Python\Python310\python.exe
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--medvram --opt-split-attention --lowvram

git pull
start http://127.0.0.1:7860/

call webui.bat

Additional information, context and logs

Below is the error message shown in the web UI after I press the generate button:

RuntimeError: CUDA out of memory. Tried to allocate 14.00 MiB (GPU 0; 4.00 GiB total capacity; 3.41 GiB already allocated; 0 bytes free; 3.47 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Time taken: 0.07s | Torch active/reserved: 3489/3558 MiB, Sys VRAM: 4096/4096 MiB (100.0%)
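As the error message suggests, the allocator behavior can be tuned through an environment variable in webui-user.bat. This is only a sketch of the syntax; the value 128 below is an illustrative guess, not a number recommended anywhere in this thread:

```
rem Hypothetical addition to webui-user.bat: cap the allocator's block
rem split size so large free blocks are less likely to become fragmented.
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
```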

Frocean avatar Oct 20 '22 04:10 Frocean

Traceback in cmd:

Traceback (most recent call last):
  File "D:\NovelAIleak\stable-diffusion-webui-master\modules\ui.py", line 212, in f
    res = list(func(*args, **kwargs))
  File "D:\NovelAIleak\stable-diffusion-webui-master\webui.py", line 63, in f
    res = func(*args, **kwargs)
  File "D:\NovelAIleak\stable-diffusion-webui-master\modules\txt2img.py", line 44, in txt2img
    processed = process_images(p)
  File "D:\NovelAIleak\stable-diffusion-webui-master\modules\processing.py", line 411, in process_images
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength)
  File "D:\NovelAIleak\stable-diffusion-webui-master\modules\processing.py", line 549, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning)
  File "D:\NovelAIleak\stable-diffusion-webui-master\modules\sd_samplers.py", line 417, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={'cond': conditioning, 'uncond': unconditional_conditioning, 'cond_scale': p.cfg_scale}, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\NovelAIleak\stable-diffusion-webui-master\modules\sd_samplers.py", line 326, in launch_sampling
    return func()
  File "D:\NovelAIleak\stable-diffusion-webui-master\modules\sd_samplers.py", line 417, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={'cond': conditioning, 'uncond': unconditional_conditioning, 'cond_scale': p.cfg_scale}, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\NovelAIleak\stable-diffusion-webui-master\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "D:\NovelAIleak\stable-diffusion-webui-master\repositories\k-diffusion\k_diffusion\sampling.py", line 80, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\NovelAIleak\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\NovelAIleak\stable-diffusion-webui-master\modules\sd_samplers.py", line 263, in forward
    x_out[-uncond.shape[0]:] = self.inner_model(x_in[-uncond.shape[0]:], sigma_in[-uncond.shape[0]:], cond=uncond)
  File "D:\NovelAIleak\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\NovelAIleak\stable-diffusion-webui-master\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "D:\NovelAIleak\stable-diffusion-webui-master\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "D:\NovelAIleak\stable-diffusion-webui-master\repositories\stable-diffusion\ldm\models\diffusion\ddpm.py", line 987, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "D:\NovelAIleak\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1148, in _call_impl
    result = forward_call(*input, **kwargs)
  File "D:\NovelAIleak\stable-diffusion-webui-master\repositories\stable-diffusion\ldm\models\diffusion\ddpm.py", line 1410, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "D:\NovelAIleak\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\NovelAIleak\stable-diffusion-webui-master\repositories\stable-diffusion\ldm\modules\diffusionmodules\openaimodel.py", line 732, in forward
    h = module(h, emb, context)
  File "D:\NovelAIleak\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\NovelAIleak\stable-diffusion-webui-master\repositories\stable-diffusion\ldm\modules\diffusionmodules\openaimodel.py", line 85, in forward
    x = layer(x, context)
  File "D:\NovelAIleak\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\NovelAIleak\stable-diffusion-webui-master\repositories\stable-diffusion\ldm\modules\attention.py", line 258, in forward
    x = block(x, context=context)
  File "D:\NovelAIleak\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\NovelAIleak\stable-diffusion-webui-master\repositories\stable-diffusion\ldm\modules\attention.py", line 209, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "D:\NovelAIleak\stable-diffusion-webui-master\repositories\stable-diffusion\ldm\modules\diffusionmodules\util.py", line 114, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "D:\NovelAIleak\stable-diffusion-webui-master\repositories\stable-diffusion\ldm\modules\diffusionmodules\util.py", line 127, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "D:\NovelAIleak\stable-diffusion-webui-master\repositories\stable-diffusion\ldm\modules\attention.py", line 212, in _forward
    x = self.attn1(self.norm1(x)) + x
  File "D:\NovelAIleak\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\NovelAIleak\stable-diffusion-webui-master\modules\sd_hijack_optimizations.py", line 107, in split_cross_attention_forward
    s2 = s1.softmax(dim=-1, dtype=q.dtype)
RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 4.00 GiB total capacity; 3.32 GiB already allocated; 0 bytes free; 3.44 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Frocean avatar Oct 20 '22 05:10 Frocean

(Added) The first time I launch the web UI and generate, it takes 7.11s before showing the RuntimeError; after that, each subsequent generation fails in under 0.1s.

Frocean avatar Oct 20 '22 05:10 Frocean

Back up and then edit the file: models\Stable-diffusion\your model.yaml

Find use_ema and change it to False; this worked for me.
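For illustration only, the edit would look something like the fragment below; the exact nesting varies between model configs, so search for use_ema in your own .yaml rather than copying this placement:

```yaml
model:
  params:
    use_ema: False  # hypothetical placement; match your model's own yaml layout
```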

Coder-Sakura avatar Oct 20 '22 09:10 Coder-Sakura

You can try the following command line:

set COMMANDLINE_ARGS=--lowvram --xformers

Copy it to use, and make sure you have enough system RAM available, or enable/increase virtual memory (the pagefile).

mwbdcz avatar Oct 20 '22 14:10 mwbdcz

Looks like you only have a 4GB card, so you are operating close to the minimum VRAM. You need to stick to very small images and small (or no) batches. There are some command line args you can use to help with low VRAM; the error message you provided at the top indicates what you need to look up.
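A rough back-of-envelope sketch of why 4GB is tight: a single attention score matrix in the SD v1 UNet at the default 512x512 resolution already runs to hundreds of MiB before any splitting or memory-efficient attention kicks in. The head count, fp16 element size, and 8x latent downsampling below are illustrative assumptions, not measurements from this thread:

```python
# Rough size of one attention score matrix at the UNet's highest-resolution
# block. All constants are illustrative assumptions for SD v1 at 512x512.
def attn_matrix_mib(image_px: int = 512, heads: int = 8, bytes_per_el: int = 2) -> float:
    tokens = (image_px // 8) ** 2          # VAE downsamples 8x: 512px -> 64x64 = 4096 tokens
    return heads * tokens * tokens * bytes_per_el / 2**20

print(f"{attn_matrix_mib():.0f} MiB")      # → 256 MiB for one softmax input at fp16
```

The quadratic growth in `tokens` is also why options like --medvram, --lowvram, and xformers' memory-efficient attention matter most at larger image sizes.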

mayofiddler avatar Oct 20 '22 18:10 mayofiddler

I'm sorry for my late reply. Following the methods above, --xformers works best for me. Thanks everyone, my web UI is now running properly.

Frocean avatar Oct 21 '22 02:10 Frocean