
Ideas & suggestions: The program needs to be optimized; the graphics card's shared GPU memory is not used at runtime.

Open mayjack0312 opened this issue 2 years ago • 4 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

I can use this program normally, but while it was running I checked the system resource manager and found that although GPU usage reportedly reaches 100%, the utilization rate shown is only 7%, which suggests the program does not fully exploit the graphics card's performance. At runtime, dedicated GPU memory utilization reaches about 70%, but shared GPU memory is completely unused.

Data (RTX 3070):

  • Utilization: 7%
  • Dedicated GPU memory: 5.5/8.0 GB
  • Shared GPU memory: 0.1/8.0 GB
  • GPU memory: 5.6/16 GB

I hope the runtime can also make use of shared GPU memory, to maximize the graphics card's performance and speed up rendering.

Steps to reproduce the problem

Run normally, then check Task Manager.

What should have happened?

While the program runs normally and dedicated GPU memory utilization stays unchanged, shared GPU memory should also be used for acceleration, maximizing graphics card utilization and speeding up CUDA execution.

Commit where the problem happens

226d840e84c5f306350b0681945989b86760e616

What platforms do you use to access the UI?

Windows

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments

@echo off
set PYTHON=F:\NovelAI\stable-diffusion-webui\Python3.10\python.exe
set GIT=F:\NovelAI\stable-diffusion-webui\Git\mingw64\libexec\git-core\git.exe
set COMMANDLINE_ARGS=--autolaunch --xformers --deepdanbooru
set GIT_PYTHON_REFRESH=quiet
set TRANSFORMERS_CACHE=F:\NovelAI\stable-diffusion-webui\deploy\.cache\huggingface\transformers
set HUGGINGFACE_HUB_CACHE=F:\NovelAI\stable-diffusion-webui\deploy\.cache\huggingface\hub
%PYTHON% launch.py
pause

List of extensions

None

Console logs

F:\NovelAI\stable-diffusion-webui>python launch.py
==============================================================================================================
INCOMPATIBLE PYTHON VERSION

This program is tested with 3.10.6 Python, but you have 3.11.1.
If you encounter an error with "RuntimeError: Couldn't install torch." message, or any other error regarding unsuccessful package (library) installation, please downgrade (or upgrade) to the latest version of 3.10 Python and delete current Python and "venv" folder in WebUI's directory.

You can download 3.10 Python from here: https://www.python.org/downloads/release/python-3109/

Alternatively, use a binary release of WebUI: https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases

Use --skip-python-version-check to suppress this warning.
==============================================================================================================
Python 3.11.1 (tags/v3.11.1:a7a450f, Dec  6 2022, 19:58:39) [MSC v.1934 64 bit (AMD64)]
Commit hash: 226d840e84c5f306350b0681945989b86760e616

Additional information

No response

mayjack0312 commented Feb 18 '23 17:02

GPU memory is being used because, by default, the model gets loaded on startup and kept in GPU memory. There are plenty of options to swap the model out of GPU memory if you want to free it (search for medvram and similar).

And regarding GPU utilization, that's up to the model itself; there isn't much the webui can do about it. Run with a higher batch size and your GPU utilization will go up.

All in all, I don't see an issue; it's just how things work.
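As an illustration, here is a hedged sketch of how such a memory-swapping flag could be added to the launcher batch file quoted earlier in this issue (--medvram and --lowvram are documented webui command-line options; the other values simply mirror the reporter's setup):

```bat
rem Sketch: ask webui to move model components out of dedicated GPU memory
rem between steps. --medvram trades some speed for lower VRAM use;
rem --lowvram is more aggressive. Use at most one of the two.
set COMMANDLINE_ARGS=--autolaunch --xformers --deepdanbooru --medvram
```

Batch size, by contrast, is set in the UI itself (the "Batch size" slider), not on the command line.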

vladmandic commented Feb 19 '23 00:02

> gpu memory is being used since by default model gets loaded on startup and kept in gpu memory. there are plenty of options to swap model out of gpu memory if you want to free it (search for medvram and similar).
>
> and regarding gpu utilization, thats up to model itself, not much webui can do about it. run with higher batch size and your gpu utilization will go up.
>
> all in all, i don't see an issue, its just how things work.

What I mean is that dedicated and shared GPU memory should be able to work together; the data exchange within dedicated memory already works very well. I have tested 30 different models, and this problem occurs with all of them.

mayjack0312 commented Feb 19 '23 03:02

Based on the description, I hypothesized that maybe Task Manager is displaying graphics utilization instead of CUDA utilization - and this is apparently known to be the case: https://michaelceber.medium.com/gpu-monitoring-on-windows-10-for-machine-learning-cuda-41088de86d65
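To verify, one can read compute (CUDA) utilization directly instead of relying on the default Task Manager graphs; a sketch, assuming the NVIDIA driver's bundled nvidia-smi tool is on PATH:

```
rem Report compute load and memory use, refreshing every second.
rem utilization.gpu reflects the compute engines, not the 3D queue
rem that Task Manager graphs by default.
nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total --format=csv -l 1
```

Alternatively, each graph in Task Manager's GPU view has a drop-down that can be switched from "3D" to "Cuda".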

"Shared GPU Memory" is virtual memory sliced out of system RAM. It does not reside on the GPU and you don't want this process using it.

DejitaruJin commented Feb 19 '23 03:02

> Based on the description, I hypothesized that maybe Task Manager is displaying graphics utilization instead of CUDA utilization - and this is apparently known to be the case: https://michaelceber.medium.com/gpu-monitoring-on-windows-10-for-machine-learning-cuda-41088de86d65
>
> "Shared GPU Memory" is virtual memory sliced out of system RAM. It does not reside on the GPU and you don't want this process using it.

Thank you for your answer. I'll go and have a look.

mayjack0312 commented Feb 19 '23 03:02