stable-diffusion-webui
[Feature Request]: add VRAM Memory recovery feature
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What would your feature do?
Every time I hit a CUDA out of memory error, I try to turn down the resolution and other parameters, but I still can't generate images, even though I have generated the same image with the same parameters before. I opened Task Manager and noticed that dedicated GPU memory was still full, even though nothing was running. I have to restart the webui to fix this, but restarting takes a lot of time. Maybe you could add a setting so that we can recover VRAM after a "CUDA out of memory" error. It would be appreciated if you added this.
Proposed workflow
- Go to settings
- Enable the "recover VRAM after an error" option
- VRAM gets released
Additional information
Nope
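In principle, the requested workflow amounts to catching the OOM error, releasing cached allocations, and retrying. A minimal sketch of that idea, assuming PyTorch; the function name and retry policy are illustrative, not webui's actual implementation:

```python
import gc

def generate_with_recovery(generate, *args, **kwargs):
    """Run a generation function once; on a CUDA OOM error, free cached
    VRAM and retry. Illustrative sketch of the requested setting."""
    try:
        return generate(*args, **kwargs)
    except RuntimeError as err:  # CUDA OOM surfaces as a RuntimeError in PyTorch
        if "out of memory" not in str(err).lower():
            raise  # unrelated error: don't swallow it
        gc.collect()  # drop Python references to dead tensors first
        try:
            import torch
            if torch.cuda.is_available():
                torch.cuda.empty_cache()  # hand cached blocks back to the driver
        except ImportError:
            pass  # torch unavailable; nothing GPU-side to release
        return generate(*args, **kwargs)  # retry once with recovered VRAM
```

A real implementation would likely also lower batch size or resolution before retrying, since the same request may simply not fit.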
As soon as you click the generate button again, GPU memory is reset: https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/processing.py#L510
I also tested this and it seems to work. I wonder what goes wrong on your PC?
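For context, the cleanup that the linked code triggers (webui's `torch_gc` helper in `modules/devices.py`) boils down, roughly, to the standard PyTorch cache-release calls. A sketch, with the CUDA-specific parts guarded so it degrades gracefully; details may differ between webui versions:

```python
import gc

def torch_gc():
    """Free cached GPU memory: roughly what webui runs before each
    generation. Sketch only; the real helper may differ."""
    gc.collect()  # drop Python references to freed tensors
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # return cached blocks to the driver
            torch.cuda.ipc_collect()  # reclaim memory shared across processes
    except ImportError:
        pass  # torch not installed; nothing GPU-side to free
```

Note that `empty_cache()` only releases memory the caching allocator holds but no live tensor uses; memory still referenced by Python objects stays allocated, which may explain "most but not all" being freed.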
Most memory is being reset, but not all of it. Thank you for your reply.
Can confirm. Using an AMD Radeon RX 7900 XT with 20464 MB (20 GB) of VRAM. I generate one image and then have to restart webui.bat every time I want to generate a new one. Using the command-line arg --opt-split-attention-v1 let me generate two 512x512 images instead of only one, but the issue persisted after the second image. The first image brought me from around 6 GB to 15 GB, the second from 15 GB to around 18 GB, and the third maxed out VRAM and never made any progress when monitored from the CMD window. Seeking other ways to manually clear VRAM after each completed generation until this is resolved.
AMD GPUs are a different story here.
Gonna give real-world examples?
I figured as much, but thought it might at least help with troubleshooting. I don't see anywhere that the OP specified which GPU is being used, so they might be facing the same issue I am if it's a problem specific to AMD GPUs. If I have to restart webui.bat every time, that's quite the tradeoff for having self-hosted SD.
I'll have a look to see what can be done for AMD GPUs, but since I don't have one it's gonna be... interesting.
@saltyollpheist
No, this user is on NVIDIA (note the "CUDA" in the error). Reports like that should be filed separately against the DirectML fork, since memory not decreasing after use is common for many ML tasks there, and it isn't something a developer here can control; only the DirectML fork can.
There is an extension, "supermerger", that lets you unload the model with a button. The same concept is useful when loading the CLIP model with the "clip interrogator" extension.
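The unload-with-a-button idea boils down to moving the model's weights off the GPU and then releasing the cached allocations. A sketch under stated assumptions: `holder` is a hypothetical container for the loaded model, and this is the general pattern, not the extension's actual code:

```python
import gc

def unload_model(holder):
    """Move the model out of VRAM and release the cache: the general
    pattern behind unload buttons. 'holder' is a hypothetical container."""
    holder["model"] = holder["model"].to("cpu")  # keep the weights in system RAM
    gc.collect()  # drop stale Python references before emptying the cache
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # return the freed blocks to the driver
    except ImportError:
        pass  # torch not installed; nothing cached on a GPU to release
```

Moving to CPU first matters: `empty_cache()` alone cannot release memory that live tensors on the GPU still occupy.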
I am facing the same issue here. Unless I restart my WebUI (close it from the command line and rerun the bat file), the server keeps showing the CUDA out of memory error.
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/8394
@ClashSAN
On Linux we use the ROCm/HIP port of CUDA, so we get the "CUDA out of memory" message on AMD GPUs as well.