
[Bug]: PuLID ControlNet model VRAM leak

Open inspire-boy opened this issue 1 year ago • 1 comment

Checklist

  • [X] The issue exists after disabling all extensions
  • [X] The issue exists on a clean installation of webui
  • [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • [X] The issue exists in the current version of the webui
  • [X] The issue has not been reported before recently
  • [X] The issue has been reported before but has not been fixed yet

What happened?

This is a discussion issue, because I just suspect there's something wrong with the webui.

The new IP-Adapter model, PuLID, has a VRAM leak problem. Every time I switch to a different reference face image and generate, VRAM grows by 1.3 GB. After several runs, an OOM occurs.

Reproduced on a 3060 12 GB and a 3090 24 GB, on both Windows and Linux, with webui v1.6 through v1.9.3. The official Diffusers demo and the ComfyUI extension are both fine.

I suspect there's something wrong with VRAM management in webui. Hopefully somebody can figure it out.

Old issue: https://github.com/Mikubill/sd-webui-controlnet/issues/2905 PuLID feature: https://github.com/Mikubill/sd-webui-controlnet/discussions/2841

Steps to reproduce the problem

1. Start webui, prompt "a girl", enable ControlNet (IP-Adapter PuLID model), upload a reference image - VRAM: 3.5/12 GB
2. Make a generation. - VRAM: 4.8/12 GB
3. Change to a different reference image (not the same as the last one), make another generation - VRAM: 6.2/12 GB
4. Change to a different reference image (not the same as any previous one), make another generation - VRAM: 7.5/12 GB
5. Change to a different reference image (not the same as any previous one), make another generation - OOM occurs - VRAM: 12/12 GB
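The numbered steps above trace a classic leak signature: VRAM climbs after every generation and never returns to the post-startup baseline. A minimal, GPU-free sketch of that check (the function name and tolerance are hypothetical, not part of webui):

```python
def vram_grows_without_release(readings_gb, tolerance_gb=0.2):
    """Return True when per-generation VRAM readings keep climbing
    instead of dropping back toward the first (baseline) reading."""
    baseline = readings_gb[0]
    # A leak shows as every reading exceeding its predecessor by more
    # than the tolerance, with the last reading well above baseline.
    increasing = all(b - a > tolerance_gb
                     for a, b in zip(readings_gb, readings_gb[1:]))
    return increasing and readings_gb[-1] - baseline > tolerance_gb

# Readings from the repro steps above (GB used after each step):
leaky = [3.5, 4.8, 6.2, 7.5, 12.0]
healthy = [3.5, 4.8, 4.8, 4.8, 4.8]  # expected behavior: stable after warm-up
print(vram_grows_without_release(leaky))    # True
print(vram_grows_without_release(healthy))  # False
```

On a real GPU the readings would come from `torch.cuda.memory_allocated()` sampled after each generation; here they are hard-coded from the report.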

What should have happened?

Every time a generation finishes, VRAM should drop back to a stable baseline value instead of growing.

What browsers do you use to access the UI?

Google Chrome

Sysinfo

bash webui.sh -f --api --no-half-vae --xformers --medvram --disable-nan-check

Console logs

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 26.00 MiB (GPU 0; 23.69 GiB total capacity; 21.85 GiB already allocated; 26.94 MiB free; 23.32 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
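The traceback's allocator hint can be applied via an environment variable before launching webui. This is a documented PyTorch fragmentation mitigation, not a fix for a genuine leak, and the value 512 here is only an example; a leak caused by embeddings of previous reference images being kept alive would still exhaust VRAM eventually. A sketch of the launch, reusing the flags from the Sysinfo section:

```shell
# PYTORCH_CUDA_ALLOC_CONF is the knob the OOM message refers to.
# Capping cached block size reduces fragmentation when reserved
# memory is much larger than allocated memory.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
bash webui.sh -f --api --no-half-vae --xformers --medvram --disable-nan-check
```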

Additional information

No response

inspire-boy avatar May 21 '24 08:05 inspire-boy

Has there been any progress on this recently? I still have this VRAM leak issue with the latest versions of sd-webui and ControlNet, sadly.

CVRS-CJH avatar Apr 02 '25 08:04 CVRS-CJH