stable-diffusion-webui
Update torch and other dependencies to make it work on 24.04
Description
Ubuntu 24.04 ships with Python 3.12 and a newer rustc. This requires updating torch to 2.2+, which is available for py3.12, and transformers to 4.34+, since only then is tokenizers new enough to build with the provided rustc.
Screenshots/videos:
Checklist:
- [x] I have read contributing wiki page
- [x] I have performed a self-review of my own code
- [x] My code follows the style guidelines
- [x] My code passes tests
you forgot AMD changes
I tried this out on amdgpu on Arch Linux and it actually worked fine :+1: I just had to set pytorch_lightning back to 1.9.4 in requirements_versions.txt
@HinaHyugaHime what do you mean by amd changes? I would be glad to provide them.
Torch version
The CLIPTextModel_from_pretrained change should not be there. That code prevents loading the CLIP model from the web, since its weights are already included in the checkpoint, and removing None disables that.
As for the rest, I'm generally against updating versions without an explicit need for it... Does this all work on Windows with recommended python 3.10.6?
@janbernloehr I finally fixed Torch hell for macOS with this PR, so please do not modify webui-macos-env.sh when you are fixing problems for other OSes. Intel Macs must stay on 2.1.2 until they stop working completely.
Have you tried just changing python and torch in webui-env.sh without modifying anything else?
Something like:
python_cmd="python3.12"
export TORCH_COMMAND="pip install torch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 --index-url https://download.pytorch.org/whl/cu121"
If that works I have another suggestion.
@AUTOMATIC1111 If what I proposed above works, I would suggest similar approach as we have for MacOS.
Something like:
if [[ "${OSTYPE}" == "linux"* ]] && some_other_check; then
if [[ -f "$SCRIPT_DIR"/webui-new-linux-env.sh ]]
then
source "$SCRIPT_DIR"/webui-new-linux-env.sh
fi
fi
I am not sure what some_other_check should be. Something like [[ $(grep -c "Ubuntu 24" /etc/issue) -ne 0 ]] would work in this case, but a more generic check would be better.
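A sketch of what a more generic some_other_check could look like (the function name, the file-path argument, and the "Ubuntu-or-derivative, 24.x or newer" criterion are my assumptions, not part of the PR): parse /etc/os-release instead of grepping /etc/issue, since os-release has well-defined ID, ID_LIKE, and VERSION_ID fields.

```shell
#!/usr/bin/env bash
# Hypothetical generic distro check. Accepts an alternate os-release path
# as $1 to make it testable; defaults to the real /etc/os-release.
is_new_linux() {
    local os_release="${1:-/etc/os-release}"
    [[ -f "$os_release" ]] || return 1
    local ID="" ID_LIKE="" VERSION_ID=""
    # shellcheck disable=SC1090
    source "$os_release"   # defines ID, ID_LIKE, VERSION_ID, ...
    # True for Ubuntu (or Ubuntu-derived) releases 24.x and newer
    if [[ "$ID" == "ubuntu" || "$ID_LIKE" == *"ubuntu"* ]] \
       && [[ "${VERSION_ID%%.*}" -ge 24 ]]; then
        return 0
    fi
    return 1
}
```

The webui.sh snippet above could then use `if [[ "${OSTYPE}" == "linux"* ]] && is_new_linux; then ...` as the guard.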
@janbernloehr I managed to run A1111 with 3.12 on my Mac with just a few minor changes. I only have Ubuntu 20.04.6 (on a server without a GPU), so I can't test if this works on 24.x with Nvidia.
I only changed this:
webui-user.sh
# python3 executable
- #python_cmd="python3"
+ python_cmd="python3.12"
# install command for torch
- #export TORCH_COMMAND="pip install torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113"
+ export TORCH_COMMAND="pip install torch==2.3.0 torchvision==0.18.0"
requirements_versions.txt
- transformers==4.30.2
+ transformers==4.41.2
I haven't changed requirements.txt, since it is for colab users.
And I used ./webui.sh --skip-python-version-check to suppress version warnings.
That's all.
After I remove venv, on the first run I always get this error, even though setuptools==69.5.1 exists in :
from distutils.version import StrictVersion
ModuleNotFoundError: No module named 'distutils'
But if I just rerun ./webui.sh again, it works 🤷🏻‍♂️
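For reference, distutils was removed from the standard library in Python 3.12 (PEP 632), and setuptools ships a compatibility shim in its place. A hedged sketch of a workaround (the import-order trick is my assumption about why the second run succeeds — setuptools' shim is reliably active once setuptools itself has been imported):

```python
# distutils was removed from the stdlib in Python 3.12 (PEP 632).
# Importing setuptools first activates its bundled distutils shim,
# so the legacy import below keeps working. On Python <= 3.11 the
# stdlib distutils is used and the extra import is harmless.
import setuptools  # noqa: F401  (must come before any distutils import)
from distutils.version import StrictVersion

assert StrictVersion("1.9.4") < StrictVersion("2.0.0")
```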
I am getting the error below every time, since I haven't replaced None with pretrained_model_name_or_path in CLIPTextModel_from_pretrained:
raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`
Failed to create model quickly; will retry using slow method.
Otherwise it works fine as far as I can tell.
@AUTOMATIC1111 If you don't mind, I will reopen https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13667 with steps to reproduce and a note that I noticed the problem only with 3.12, not with 3.10.
@janbernloehr that is the reason I am not using the pretrained_model_name_or_path patch you used.
Slow method works just fine:
Applying attention optimization: sub-quadratic... done.
Model loaded in 8.4s (load weights from disk: 0.3s, create model: 1.7s, apply weights to model: 5.8s, apply half(): 0.2s, calculate empty prompt: 0.2s).
All test passed:
33 passed, 60 warnings in 74.66s (0:01:14)
and basic generation works fine (I just did some basic tests):
Steps: 20, Sampler: DPM++ 2M, Schedule type: Karras, CFG scale: 7, Seed: 1341933258, Size: 512x512, Model hash: 6ce0161689, Model: v1-5-pruned-emaonly, Version: v1.9.4
Time taken: 11.3 sec.
version: v1.9.4 • python: 3.12.4 • torch: 2.3.0 • xformers: N/A • gradio: 3.41.2 • checkpoint: 6ce0161689
@viking1304: thanks for your input - indeed these are very minimal changes! My intention with this PR was to fix all the weird things too, which is why I ended up updating a lot more deps. But I see that this might cause unforeseen problems for some users.
@janbernloehr
EDIT: I had to completely change my message since this issue was brought to my attention last night:
PyTorch support for Python 3.12 in general is considered experimental. Please use Python version between 3.8 and 3.11 instead. This is an existing issue since PyTorch 2.2.
from https://github.com/pytorch/pytorch/releases/tag/v2.3.0
So, it would be better to find another solution than trying to make a1111 run on Python 3.12, since PyTorch, the most important package, does not properly support 3.12.
Can you please try this?
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.10
On the latest Manjaro you now need to install python311 (not to be confused with python3.11) from yay, because the system dropped Python 3.11 support. So this PR is useful not only for Ubuntu users.
Thank you for the pull request, I was able to use it to make everything work and was able to reproduce a previous generation on my Ubuntu 24.04 install.
@light-and-ray @faattori
This PR changes too many unnecessary things and might compromise other systems.
What I did here is enough and does not break anything.
Torch only partially works on 3.12, so some things might not work.
@AUTOMATIC1111 since 3.10 can be installed from deadsnakes/ppa now on Ubuntu 24, I would suggest closing this PR and putting a note that users can install 3.10 like this:
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.10
@janbernloehr @faattori, can you confirm that you can install 3.10 like this?
So far I haven't encountered anything that would not work on python3.12 regarding torch, so I am going to keep on experimenting and see when and what breaks.
But so far I have no reason to install python3.10.
So I'm running a fresh install of Linux Mint 22, based on Ubuntu 24.04. I couldn't remove Python 3.12. To get A1111 installed I had to do the following:
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.10 python3.10-venv
python3.10 -m ensurepip --upgrade
then change line 47 in webui.sh to point to python3.10 rather than python3
A1111 is now installed and I've been able to generate an image. I'm going to keep going and see if I encounter any other issues tomorrow.
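For anyone following along, the webui.sh edit mentioned above would look roughly like this, in the same diff style as earlier in the thread (the exact original line content in webui.sh is my assumption):

webui.sh
- python_cmd="${python_cmd:-python3}"
+ python_cmd="${python_cmd:-python3.10}"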
I would like to point out that PyTorch 2.4, which was released about 3 weeks ago, DOES fully support Python 3.12 now.
2.4 also supports CUDA 12.4 instead of 12.1, so that'd be a nice target.
As a result, the cu121 repo needs to be cu124 instead.
Side note: Upcoming 2.4.1 seems to support CUDA 12.5
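If someone wants to try it, a TORCH_COMMAND along these lines should target the cu124 wheels (the exact torchvision/torchaudio companion versions are my assumption based on the usual PyTorch version pairing, not verified against this repo):

```shell
# Assumed version pairing for the PyTorch 2.4 / CUDA 12.4 wheel index
export TORCH_COMMAND="pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu124"
```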
With pydantic 1.10.16 I run into the following issue with this MR:
File "~/stable-diffusion-webui/venv/lib/python3.12/site-packages/pydantic/typing.py", line 66, in evaluate_forwardref
return cast(Any, type_)._evaluate(globalns, localns, set())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: ForwardRef._evaluate() missing 1 required keyword-only argument: 'recursive_guard'
However, adjusting the pydantic version to 1.10.18 seems to fix that. So maybe this could be adjusted before merging? (Tested on Arch Linux with Python 3.12.5)
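In requirements_versions.txt terms (assuming the pin in this PR is 1.10.16 as reported), the fix would be just:

requirements_versions.txt
- pydantic==1.10.16
+ pydantic==1.10.18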