sd-webui-controlnet
Switching controlnet models triggers CUDA memory errors
Steps to reproduce, at least on a 6GB card with --medvram and the Low VRAM checkbox selected:
- enable controlnet in txt2img tab and put in an image
- set prompts
- select depth preprocessor and depth model
- generate successfully
- switch to canny preprocessor and canny model
- click generate and see the CUDA memory error
- switch back to depth preprocessor and depth model
- click generate and see the CUDA memory error
- stop and restart the webui, then follow steps 1-3 to generate successfully once again; it can be with a different combo of preprocessor/model, and doesn't seem to be tied to depth being used first.
My suspicion, based on Task Manager, is that the old preprocessor/model don't get taken out of memory when you switch them. If you have a 12GB+ GPU that can handle having both in memory, you probably won't have this issue.
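If it helps to reproduce this without clicking through the UI, here is a rough sketch against the WebUI API (webui started with --api). The field names of the ControlNet unit and the model names below are assumptions and may differ between extension versions, so treat it as a sketch rather than a reference.

```python
import base64
import requests

URL = "http://127.0.0.1:7860"  # assumed default address, webui started with --api

with open("input.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

def generate(module, model):
    # One ControlNet unit passed through alwayson_scripts; key names
    # ("input_image", "module", "model", "lowvram") may vary by version.
    payload = {
        "prompt": "a test prompt",
        "steps": 20,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": image_b64,
                    "module": module,   # preprocessor
                    "model": model,     # use the exact name from your model dropdown
                    "lowvram": True,
                }]
            }
        },
    }
    r = requests.post(f"{URL}/sdapi/v1/txt2img", json=payload)
    r.raise_for_status()
    print(module, model, "-> ok")

generate("depth", "control_sd15_depth")  # first run: works
generate("canny", "control_sd15_canny")  # second run: CUDA OOM on a 6GB card
```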
IMO models should be unloaded only when you switch to another model, not when you disable controlnet
IMO they should also be unloaded when ControlNet is disabled, to save VRAM! Think about users without big cards :)
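For context, fully releasing a model's VRAM in PyTorch takes more than dropping the UI selection; below is a minimal sketch of what an unload would have to do. This is the generic pattern, not the extension's actual code.

```python
import gc
import torch

def unload_control_model(model):
    """Release a no-longer-needed model's VRAM.

    Dropping the Python reference alone is not enough if something else
    (a cache dict, a closure, UI state) still holds the model, and the
    CUDA allocator keeps freed blocks reserved until empty_cache().
    """
    model.to("cpu")            # move the weights out of VRAM first
    del model                  # drop this reference
    gc.collect()               # collect leftover reference cycles
    torch.cuda.empty_cache()   # hand cached blocks back to the driver
```

If a cache somewhere still holds a reference to the old model, none of this helps, which would match the symptom of VRAM only ever going up until restart.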
I have the exact same problem; it's getting annoying restarting the webui every time.
I'm seeing this as well on an RTX 4090 with 24 GB of VRAM. Only about 35% of the VRAM is in use when it happens, so I don't think it's a VRAM limitation. This has been happening since I took an update a few days ago.
Try with the latest commit and model cache = 1. (idk, maybe works)
Where do you put the model cache = 1?
Settings > ControlNET > Model cache size
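If you prefer to check or change it without the UI, the same setting is exposed through the WebUI options API; the option key name below is an assumption (it may be control_net_model_cache_size or similar, depending on your version), so list the options first to confirm it.

```python
import requests

URL = "http://127.0.0.1:7860"  # webui started with --api

# List the ControlNet-related options to find the exact key name.
opts = requests.get(f"{URL}/sdapi/v1/options").json()
print({k: v for k, v in opts.items() if "control_net" in k})

# Set the cache size to 1 (assumed key; adjust to what the listing shows).
requests.post(
    f"{URL}/sdapi/v1/options",
    json={"control_net_model_cache_size": 1},
).raise_for_status()
```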
I did some tests and it looks like there is something wrong in the memory management.
I'm using 3 models at the same time (openpose, depth, canny) with model cache = 3. The first run is OK: it loads the models and memory usage increases as expected. But on the second run, without changing anything, memory usage increases again when the models are loaded and I get out of memory.
In my case I have less RAM than VRAM, so the RAM runs out first.
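To make tests like this easier to compare, it can help to log RAM and VRAM from outside the webui between runs; here is a rough sketch assuming nvidia-smi is on PATH and psutil is installed.

```python
import subprocess
import psutil

def log_memory(tag):
    # VRAM in use, as reported by the driver
    vram_mib = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"],
        text=True,
    ).strip()
    # system RAM in use
    ram = psutil.virtual_memory()
    print(f"{tag}: VRAM {vram_mib} MiB, RAM {ram.used / 2**30:.1f} GiB ({ram.percent}%)")

log_memory("before run 1")
input("Run the first generation, then press Enter...")
log_memory("after run 1")
input("Run the second, identical generation, then press Enter...")
log_memory("after run 2")  # should stay roughly flat if the cache works; it climbs if models leak
```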
Yes, the video memory is not released at all. Even if I set the model cache to 0, it is not released.
img2img API!!!!