
[BUG] Python version issues lead to runtime errors

jthwxxzyt opened this issue 4 months ago · 3 comments

When running Stable Diffusion, this message is printed:

/content/stable-diffusion-webui-forge Already up to date.

INCOMPATIBLE PYTHON VERSION

This program is tested with 3.10.6 Python, but you have 3.12.11. If you encounter an error with "RuntimeError: Couldn't install torch." message, or any other error regarding unsuccessful package (library) installation, please downgrade (or upgrade) to the latest version of 3.10 Python and delete current Python and "venv" folder in WebUI's directory.

You can download 3.10 Python from here: https://www.python.org/downloads/release/python-3106/

Use --skip-python-version-check to suppress this warning.

Python 3.12.11 (main, Jun 4 2025, 08:56:18) [GCC 11.4.0] Version: f2.0.1v1.10.1-previous-669-gdfdcbab6

Stable Diffusion can then be used, but when generating an image, an error is reported after more than ten minutes:

Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
[GPU Setting] You will use 93.22% GPU memory (14071.00 MB) to load weights, and use 6.78% GPU memory (1024.00 MB) to do matrix computation.
^C

The running program is then terminated.

When I change the Python version to 3.10.6, a different error is reported instead:

ValueError: Key backend: 'module://matplotlib_inline.backend_inline' is not a valid value for backend; supported values are ['gtk3agg', 'gtk3cairo', 'gtk4agg', 'gtk4cairo', 'macosx', 'nbagg', 'notebook', 'qtagg', 'qtcairo', 'qt5agg', 'qt5cairo', 'tkagg', 'tkcairo', 'webagg', 'wx', 'wxagg', 'wxcairo', 'agg', 'cairo', 'pdf', 'pgf', 'ps', 'svg', 'template']
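For reference, that backend string comes from the notebook environment (matplotlib_inline is what Colab/Jupyter provide), so one possible workaround, which I have not verified, is to force a headless backend before launching. A minimal sketch, run as a single shell script (for example a %%bash cell) and assuming the webui is started through launch.py:

# Force a headless matplotlib backend so the notebook's inline backend is never looked up
export MPLBACKEND=Agg
cd /content/stable-diffusion-webui-forge
python launch.py --skip-python-version-check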

And when "Hires. fix" and "ADetailer" are enabled, an error will also be reported.

jthwxxzyt commented Aug 24 '25

By the way, I'm using Google Colab to run it.

jthwxxzyt commented Aug 24 '25

Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
[GPU Setting] You will use 93.22% GPU memory (14071.00 MB) to load weights, and use 6.78% GPU memory (1024.00 MB) to do matrix computation.
^C

This isn't an error; it's just Forge telling you that there was a change in its GPU memory usage setup. You can tweak this with the GPU Weights slider at the top of the UI, and it will print this at least once per session. The ^C implies the app was stopped with keyboard input, or whatever the Colab equivalent is. I don't know anything about Colab and changing Python versions, so I'm not 100% sure what's happening when you downgrade to 3.10.6.

If possible, try a fresh session using the latest version of Python 3.10 or even 3.11, and adjust the GPU Weights slider after startup to give yourself roughly 2-3 GB of inference memory instead of using most of it for GPU weights.
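Something along these lines in a fresh Colab session might get you a clean 3.10 environment. It's only a sketch I haven't tested on Colab, and it assumes Forge's webui.sh behaves like upstream webui's (the python_cmd override in webui-user.sh, and -f to allow running as root):

# If python3.10 isn't in the default repos, add the deadsnakes PPA first:
#   add-apt-repository -y ppa:deadsnakes/ppa && apt-get update
apt-get install -y python3.10 python3.10-venv python3.10-dev
cd /content/stable-diffusion-webui-forge
rm -rf venv                                 # force the venv to be rebuilt against 3.10
echo 'python_cmd="python3.10"' >> webui-user.sh
bash webui.sh -f                            # -f allows running as root on Colab

If the matplotlib backend error from the earlier comment comes back, forcing MPLBACKEND=Agg before launch may help there as well.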

MisterChief95 commented Aug 26 '25

Having the same issue. I tried restoring the Python version to 3.11 as a workaround, but it doesn't seem to work either.

Wootbloot commented Aug 27 '25