
[Bug]: ControlNet fills VRAM even if disabled

Open alexroeber opened this issue 1 year ago • 1 comment

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

What happened?

Yesterday I ran CLIP and got a "CUDA out of memory" error, which I found strange because it had worked before, but I didn't think much of it since everything else worked just fine. Today it worked again until I used ControlNet once. It seems like ControlNet loads something into GPU memory, and I haven't found a way to free it other than restarting A1111. I guess that at least means it's not a memory leak.

Steps to reproduce the problem

  1. Have only 8GB VRAM
  2. Run A1111 with sd-webui-controlnet installed
  3. Check that CLIP works
  4. Use ControlNet with some model(s)
  5. Check that CLIP runs out of memory (the out-of-memory error is even shown as text in the UI at the end)

What should have happened?

Since ControlNet needs to load something the first time it runs any model or preprocessor (hard to tell which), there should be a way to unload ControlNet again. I'm not entirely sure about this: loading something only once obviously makes sense, since you don't want to slow down every generation by reloading it each time. On the other hand, if I don't want to use ControlNet anymore, it shouldn't keep cluttering the VRAM.
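For illustration, here is a minimal sketch of what such an "unload" could do. This is generic PyTorch cleanup under the assumption that the extension keeps its ControlNet modules in some cache; `loaded_control_nets` and `unload_control_nets` are hypothetical names, not the extension's actual API.

```python
import gc

import torch

# Assumed: the extension keeps ControlNet modules resident after first use.
loaded_control_nets = {}

def unload_control_nets() -> None:
    """Drop cached ControlNet weights so the VRAM is available again (e.g. for CLIP interrogate)."""
    for name in list(loaded_control_nets):
        loaded_control_nets[name].to("cpu")  # move the weights out of VRAM first
        del loaded_control_nets[name]
    gc.collect()              # release the Python-side references
    torch.cuda.empty_cache()  # return cached allocator blocks to the driver
```

Something along these lines, exposed as a button or settings option, would let the VRAM be reclaimed without restarting the webui.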

Commit where the problem happens

webui: python: 3.10.6  •  torch: 1.13.1+cu117  •  xformers: 0.0.16rc425  •  gradio: 3.16.2  •  commit: a9fed7c3  •  checkpoint: b971695a78  •  controlnet: 56194e11

What browsers do you use to access the UI ?

No response

Command Line Arguments

--xformers

Console logs

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  Loading A111 WebUI Launcher
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 i   Settings file found, loading
 →   Updating Settings File  ✓
 i   Launcher Version 1.7.0
 i   Found a custom WebUI Config
 i   No Launcher launch options
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 →   Checking requirements :
 i   Python 3.10.6150.1013 found in registry:  C:\Users\alexr\AppData\Local\Programs\Python\Python310\
 i   Clearing PATH of any mention of Python
 →   Adding python 3.10 to path  ✓
 i   Git found and already in PATH:  c:\program files\git\cmd\git.exe
 i   Automatic1111 SD WebUI found:  C:\AI_Stuff\Web_UI\stable-diffusion-webui
 i   One or more checkpoint models were found
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  Loading Complete, opening launcher
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 i   Arguments are now: --xformers
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 ↺   Updating Webui
 ✓   Done
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 ↺   Updating Extension: sd-webui-controlnet
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 ↺   Updating Extension: sd-webui-cutoff
 ✓   Done
 i   Arguments are now: --xformers
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  WEBUI LAUNCHING VIA EMS LAUNCHER, EXIT THIS WINDOW TO STOP THE WEBUI
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 !   Any error happening after 'commit hash : XXXX' is not related to the launcher. Please report them on Automatic1111's github instead :
 ☁   https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/new/choose
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
venv "C:\AI_Stuff\Web_UI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: a9fed7c364061ae6efb37f797b6b522cb3cf7aa2
Installing requirements for Web UI

Launching Web UI with arguments: --autolaunch --xformers
Loading weights [b971695a78] from C:\AI_Stuff\Web_UI\stable-diffusion-webui\models\Stable-diffusion\anime_evil.safetensors
Creating model from config: C:\AI_Stuff\Web_UI\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: C:\AI_Stuff\Web_UI\stable-diffusion-webui\models\VAE\huggingface.safetensors
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(1): hogwarts_em
Model loaded in 8.7s (load weights from disk: 0.1s, create model: 0.3s, apply weights to model: 4.3s, apply half(): 0.6s, load VAE: 0.4s, move model to device: 1.8s, load textual inversion embeddings: 1.1s).
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 27.3s (import gradio: 2.4s, import ldm: 1.0s, other imports: 2.0s, list extensions: 0.3s, load scripts: 1.4s, load SD checkpoint: 8.9s, create ui: 11.0s, gradio launch: 0.2s).
load checkpoint from C:\AI_Stuff\Web_UI\stable-diffusion-webui\models\BLIP\model_base_caption_capfilt_large.pth
Loading model: control_sd15_depth [fef5e48e]
Loaded state_dict from [C:\AI_Stuff\Web_UI\stable-diffusion-webui\extensions\sd-webui-controlnet\models\control_sd15_depth.pth]
ControlNet model control_sd15_depth [fef5e48e] loaded.
Loading preprocessor: depth_leres
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:11<00:00,  1.33s/it]
Total progress: 100%|████████████████████████████████████████████████████████████████████| 9/9 [00:18<00:00,  2.09s/it]
Error interrogating%|████████████████████████████████████████████████████████████████████| 9/9 [00:18<00:00,  1.33s/it]
Traceback (most recent call last):
  File "C:\AI_Stuff\Web_UI\stable-diffusion-webui\modules\interrogate.py", line 212, in interrogate
    matches = self.rank(image_features, items, top_count=topn)
  File "C:\AI_Stuff\Web_UI\stable-diffusion-webui\modules\interrogate.py", line 164, in rank
    text_features = self.clip_model.encode_text(text_tokens).type(self.dtype)
  File "C:\AI_Stuff\Web_UI\stable-diffusion-webui\venv\lib\site-packages\clip\model.py", line 348, in encode_text
    x = self.transformer(x)
  File "C:\AI_Stuff\Web_UI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\AI_Stuff\Web_UI\stable-diffusion-webui\venv\lib\site-packages\clip\model.py", line 203, in forward
    return self.resblocks(x)
  File "C:\AI_Stuff\Web_UI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\AI_Stuff\Web_UI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 204, in forward
    input = module(input)
  File "C:\AI_Stuff\Web_UI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\AI_Stuff\Web_UI\stable-diffusion-webui\venv\lib\site-packages\clip\model.py", line 191, in forward
    x = x + self.mlp(self.ln_2(x))
  File "C:\AI_Stuff\Web_UI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\AI_Stuff\Web_UI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 204, in forward
    input = module(input)
  File "C:\AI_Stuff\Web_UI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\AI_Stuff\Web_UI\stable-diffusion-webui\venv\lib\site-packages\clip\model.py", line 168, in forward
    return x * torch.sigmoid(1.702 * x)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 678.00 MiB (GPU 0; 8.00 GiB total capacity; 6.47 GiB already allocated; 0 bytes free; 7.03 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Additional information

I am sorry if this is more of a ControlNet problem than a sd-webui-controlnet problem, but my guess is that this needs a change here either way.
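As an aside, the last line of the OOM traceback above suggests trying `max_split_size_mb`. That only works around allocator fragmentation and does not unload ControlNet, but for completeness, a minimal sketch of setting it (this is standard PyTorch `PYTORCH_CUDA_ALLOC_CONF` behaviour, not specific to this extension; 512 is an arbitrary example value):

```python
import os

# Must be set before the process makes its first CUDA allocation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

import torch  # imported afterwards so the allocator picks the setting up
```

With the A1111 launcher it would more naturally be set as an environment variable before launch rather than in Python.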

alexroeber · Mar 18 '23 09:03

My cases:

  • CommandLine Args: --xformers --opt-sub-quad-attention --opt-split-attention
  • 4GB VRAM, 16GB RAM

[Case 1]

  1. Launch & Open Automatic1111
  2. Generate 144x144 image with Controlnet openpose preprocessor & model
  3. Done
  4. Switch preprocessor & model to canny
  5. torch.cuda.OutOfMemoryError: CUDA out of memory
  6. Switch preprocessor & model back to openpose
  7. torch.cuda.OutOfMemoryError: CUDA out of memory

[Case 2]

  1. Launch & Open Automatic1111
  2. Generate 768x768 image without Controlnet
  3. Done
  4. Generate 88x88 image with Controlnet openpose
  5. Done
  6. Disable Controlnet checkmark and generate 768x768 image
  7. torch.cuda.OutOfMemoryError: CUDA out of memory

MarkGree · Apr 15 '23 10:04

Happens to me too.

  • 4GB VRAM, 8GB RAM.
  • 2 ControlNet units

Every time I load a different image into one of them, I notice a VRAM leak, eventually leading to an out-of-memory crash after a few changes to the pose I'm trying to achieve (using the OpenPose model).
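If it helps narrow this down, a quick way to confirm that the ControlNet weights stay resident is to compare the CUDA allocator statistics with and without ControlNet enabled. This uses only plain PyTorch introspection, nothing extension-specific:

```python
import torch

def report_vram(tag: str) -> None:
    """Print current CUDA allocator statistics in MiB."""
    allocated = torch.cuda.memory_allocated() / 2**20
    reserved = torch.cuda.memory_reserved() / 2**20
    print(f"[{tag}] allocated: {allocated:.0f} MiB, reserved: {reserved:.0f} MiB")

report_vram("after a generation with ControlNet enabled")
# ... disable the ControlNet units and generate again ...
report_vram("after a generation with ControlNet disabled")
```

If the "allocated" number barely drops after disabling the units, the ControlNet model is still sitting in VRAM.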

h3rmit-git · Apr 27 '23 04:04