
[Bug]: RuntimeError: CUDA out of memory

Open · XLCX0429 opened this issue 3 years ago · 1 comment

What happened?

I started webui.cmd and the console kept crashing and relaunching in a loop. I know this is because my GPU memory is too small, so I want to know how to reduce the amount of VRAM the webui uses. My GPU is a GTX 1650 with 4 GB.

Version

0.0.1 (Default)

What browsers are you seeing the problem on?

Chrome

Where are you running the webui?

Windows

Custom settings

No response

Relevant log output

Relauncher: Launching...
LDSR not found at path, please make sure you have cloned the LDSR repo to ./models/ldsr/
Loading model from models/ldm/stable-diffusion-v1/model.ckpt
Global Step: 470000
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Traceback (most recent call last):
  File "scripts/webui.py", line 532, in <module>
    model, device,config = load_SD_model()
  File "scripts/webui.py", line 523, in load_SD_model
    model = load_model_from_config(config, opt.ckpt)
  File "scripts/webui.py", line 223, in load_model_from_config
    model.cuda()
  File "C:\Users\XLCX_\miniconda3\envs\ldm\lib\site-packages\pytorch_lightning\core\mixins\device_dtype_mixin.py", line 132, in cuda
    return super().cuda(device=device)
  File "C:\Users\XLCX_\miniconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 688, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "C:\Users\XLCX_\miniconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 578, in _apply
    module._apply(fn)
  File "C:\Users\XLCX_\miniconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 578, in _apply
    module._apply(fn)
  File "C:\Users\XLCX_\miniconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 578, in _apply
    module._apply(fn)
  [Previous line repeated 4 more times]
  File "C:\Users\XLCX_\miniconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 601, in _apply
    param_applied = fn(param)
  File "C:\Users\XLCX_\miniconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 688, in <lambda>
    return self._apply(lambda t: t.cuda(device))
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 3.41 GiB already allocated; 0 bytes free; 3.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Relauncher: Process is ending. Relaunching in 1s...
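
The error message itself suggests one knob to try before relaunching. A minimal sketch for a Windows command prompt, assuming the stock `webui.cmd` launcher from this repo; note that `max_split_size_mb` only mitigates allocator fragmentation and cannot create VRAM the card doesn't have, so a 4 GB GPU may still run out of memory:

```bat
rem Hedged sketch: set the allocator option the traceback suggests, then relaunch.
rem This reduces fragmentation in PyTorch's caching allocator; it does not
rem lower the model's overall memory footprint.
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:64
webui.cmd
```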

Code of Conduct

  • [X] I agree to follow this project's Code of Conduct

XLCX0429 · Oct 12 '22 05:10

You can't change the memory footprint of this package.

There's another version of the Stable Diffusion webui that accepts --lowvram and --medvram parameters, runs on some 4 GB cards, and has a much more diverse feature set, with plugins, extensions, and more: https://github.com/AUTOMATIC1111/stable-diffusion-webui

Arguments are set by editing a file; see https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Command-Line-Arguments-and-Settings
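
For example, on Windows that fork reads its launch flags from a `webui-user.bat` file next to the launcher (per the wiki page above); a minimal sketch:

```bat
rem webui-user.bat -- launch flags for the AUTOMATIC1111 fork.
rem --medvram trades some speed for lower VRAM use; --lowvram is more
rem aggressive and is the usual choice for ~4 GB cards like a GTX 1650.
set COMMANDLINE_ARGS=--lowvram

call webui.bat
```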

Good luck

codefaux · Oct 26 '22 18:10