stable-diffusion-webui-docker
ImportError: cannot import name '_compare_version' from 'torchmetrics.utilities.imports'
Has this issue been opened before?
Describe the bug
This might just be me, but I'm running into an issue and hoping someone else knows how to fix it; I can't find anything about it online. Stable Diffusion web-ui has been working perfectly on my machine for ages, but when I recently tried to start it again I got stuck on this error:
webui-docker-invoke-1 | Traceback (most recent call last):
webui-docker-invoke-1 | File "/opt/conda/bin/invokeai-configure", line 5, in <module>
webui-docker-invoke-1 | from ldm.invoke.config.invokeai_configure import main
webui-docker-invoke-1 | File "/InvokeAI/ldm/invoke/config/invokeai_configure.py", line 40, in <module>
webui-docker-invoke-1 | from ..args import PRECISION_CHOICES, Args
webui-docker-invoke-1 | File "/InvokeAI/ldm/invoke/args.py", line 100, in <module>
webui-docker-invoke-1 | from ldm.invoke.conditioning import split_weighted_subprompts
webui-docker-invoke-1 | File "/InvokeAI/ldm/invoke/conditioning.py", line 18, in <module>
webui-docker-invoke-1 | from .generator.diffusers_pipeline import StableDiffusionGeneratorPipeline
webui-docker-invoke-1 | File "/InvokeAI/ldm/invoke/generator/__init__.py", line 4, in <module>
webui-docker-invoke-1 | from .base import Generator
webui-docker-invoke-1 | File "/InvokeAI/ldm/invoke/generator/base.py", line 21, in <module>
webui-docker-invoke-1 | from pytorch_lightning import seed_everything
webui-docker-invoke-1 | File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/__init__.py", line 34, in <module>
webui-docker-invoke-1 | from pytorch_lightning.callbacks import Callback # noqa: E402
webui-docker-invoke-1 | File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/__init__.py", line 25, in <module>
webui-docker-invoke-1 | from pytorch_lightning.callbacks.progress import ProgressBarBase, RichProgressBar, TQDMProgressBar
webui-docker-invoke-1 | File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/progress/__init__.py", line 22, in <module>
webui-docker-invoke-1 | from pytorch_lightning.callbacks.progress.rich_progress import RichProgressBar # noqa: F401
webui-docker-invoke-1 | File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/progress/rich_progress.py", line 20, in <module>
webui-docker-invoke-1 | from torchmetrics.utilities.imports import _compare_version
webui-docker-invoke-1 | ImportError: cannot import name '_compare_version' from 'torchmetrics.utilities.imports' (/opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/imports.py)
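For anyone triaging this: the traceback above bottoms out in pytorch_lightning importing the private helper `_compare_version` from torchmetrics, which newer torchmetrics releases no longer ship. A rough stdlib-only sketch of the compatibility check (the 1.0 cutoff is my assumption, inferred from the 0.11.4 pin suggested below, not verified against every release):

```python
from typing import Tuple

def parse_version(v: str) -> Tuple[int, ...]:
    """Keep only the leading numeric components of a version string."""
    parts = []
    for piece in v.split("."):
        if piece.isdigit():
            parts.append(int(piece))
        else:
            break
    return tuple(parts)

def torchmetrics_breaks_lightning(tm_version: str) -> bool:
    """Assumption: _compare_version is gone from torchmetrics >= 1.0,
    so the pytorch_lightning pinned in this image fails to import."""
    return parse_version(tm_version) >= (1, 0)
```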
Which UI
invoke
Hardware / Software
- OS: Linux Mint
- OS version: 21.1
- WSL version (if applicable):
- Docker Version: 24.0.6
- Docker compose version: v2.21.0
- Repo version: master (6a34739135eb112667f00943c1fac98ab294716a)
- RAM: 48GB
- GPU/VRAM: NVIDIA RTX 2060
Steps to Reproduce
docker compose --profile download up --build
docker compose --profile invoke up --build
Additional context
I already made a fresh clone and cleaned my docker containers (docker system prune -a). So this is a completely fresh build.
Anyone run into this? Any pointers?
Over at InvokeAI I found something about installing torchmetrics v0.11.4 (https://github.com/invoke-ai/InvokeAI/issues/3658). Is this something I can configure with an env var?
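As far as I can tell there is no env var for package versions in this compose setup; the pin has to happen at image build time. A hypothetical one-line addition to services/invoke/Dockerfile (untested sketch; the version comes from the linked InvokeAI issue):

```dockerfile
# Hypothetical: pin torchmetrics in the invoke image, then rebuild
# with: docker compose --profile invoke up --build
RUN pip install torchmetrics==0.11.4
```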
Yup just faced the exact same issue with the Invoke profile. Changed to auto profile for now - which works.
Me too.
Same here. Steps to reproduce:
git clone https://github.com/AbdBarho/stable-diffusion-webui-docker.git
cd stable-diffusion-webui-docker/
docker compose --profile download up --build
docker compose --profile invoke up --build
Using an Amazon g5.xlarge instance (NVIDIA A10G Tensor Core GPU) with an EC2 Deep Learning Base GPU AMI (Ubuntu 20.04) 20231026 (ami-0d134e01570c1e7b4).
$ docker --version
Docker version 24.0.6, build ed223bc
$ uname -a
Linux ip-xxx-xxx-xxx-xxx 5.15.0-1048-aws #53~20.04.1-Ubuntu SMP Wed Oct 4 16:44:20 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
I've also tried git checkout tags/8.1.0, but got the same error:
:~/stable-diffusion-webui-docker$ docker compose --profile invoke up --build
[+] Building 0.7s (17/17) FINISHED docker:default
=> [invoke internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.99kB 0.0s
=> [invoke internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [invoke internal] load metadata for docker.io/pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime 0.6s
=> [invoke internal] load metadata for docker.io/library/alpine:3.17 0.7s
=> [invoke stage-1 1/8] FROM docker.io/pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime@sha256:82e0d379a5dedd6303c89eda57bcc434c40be11f249ddfadfd5673b84 0.0s
=> [invoke internal] load build context 0.0s
=> => transferring context: 65B 0.0s
=> [invoke xformers 1/3] FROM docker.io/library/alpine:3.17@sha256:f71a5f071694a785e064f05fed657bf8277f1b2113a8ed70c90ad486d6ee54dc 0.0s
=> CACHED [invoke stage-1 2/8] RUN --mount=type=cache,target=/var/cache/apt apt-get update && apt-get install make g++ git libopencv-dev -y && 0.0s
=> CACHED [invoke stage-1 3/8] RUN git clone https://github.com/invoke-ai/InvokeAI.git /InvokeAI 0.0s
=> CACHED [invoke stage-1 4/8] WORKDIR /InvokeAI 0.0s
=> CACHED [invoke stage-1 5/8] RUN --mount=type=cache,target=/root/.cache/pip git reset --hard f3b2e02921927d9317255b1c3811f47bd40a2bf9 && pip in 0.0s
=> CACHED [invoke stage-1 6/8] RUN --mount=type=cache,target=/root/.cache/pip git fetch && git reset --hard && git checkout main && git reset 0.0s
=> CACHED [invoke xformers 2/3] RUN apk add --no-cache aria2 0.0s
=> CACHED [invoke xformers 3/3] RUN aria2c -x 5 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diffusion-webui-docker/releases/download/ 0.0s
=> CACHED [invoke stage-1 7/8] RUN --mount=type=cache,target=/root/.cache/pip --mount=type=bind,from=xformers,source=/wheel.whl,target=/xformers-0. 0.0s
=> CACHED [invoke stage-1 8/8] COPY . /docker/ 0.0s
=> [invoke] exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:effc0d511e7589ea6981692f8685c58396379348bbc89cd8adac14bb4191848d 0.0s
=> => naming to docker.io/library/sd-invoke:30 0.0s
[+] Running 1/0
✔ Container webui-docker-invoke-1 Created 0.0s
Attaching to webui-docker-invoke-1
webui-docker-invoke-1 | Mounted ldm
webui-docker-invoke-1 | Mounted .cache
webui-docker-invoke-1 | Mounted RealESRGAN
webui-docker-invoke-1 | Mounted Codeformer
webui-docker-invoke-1 | Mounted GFPGAN
webui-docker-invoke-1 | Mounted GFPGANv1.4.pth
webui-docker-invoke-1 | Loading Python libraries...
webui-docker-invoke-1 |
webui-docker-invoke-1 | Traceback (most recent call last):
webui-docker-invoke-1 | File "/opt/conda/bin/invokeai-configure", line 5, in <module>
webui-docker-invoke-1 | from ldm.invoke.config.invokeai_configure import main
webui-docker-invoke-1 | File "/InvokeAI/ldm/invoke/config/invokeai_configure.py", line 40, in <module>
webui-docker-invoke-1 | from ..args import PRECISION_CHOICES, Args
webui-docker-invoke-1 | File "/InvokeAI/ldm/invoke/args.py", line 100, in <module>
webui-docker-invoke-1 | from ldm.invoke.conditioning import split_weighted_subprompts
webui-docker-invoke-1 | File "/InvokeAI/ldm/invoke/conditioning.py", line 18, in <module>
webui-docker-invoke-1 | from .generator.diffusers_pipeline import StableDiffusionGeneratorPipeline
webui-docker-invoke-1 | File "/InvokeAI/ldm/invoke/generator/__init__.py", line 4, in <module>
webui-docker-invoke-1 | from .base import Generator
webui-docker-invoke-1 | File "/InvokeAI/ldm/invoke/generator/base.py", line 21, in <module>
webui-docker-invoke-1 | from pytorch_lightning import seed_everything
webui-docker-invoke-1 | File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/__init__.py", line 34, in <module>
webui-docker-invoke-1 | from pytorch_lightning.callbacks import Callback # noqa: E402
webui-docker-invoke-1 | File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/__init__.py", line 25, in <module>
webui-docker-invoke-1 | from pytorch_lightning.callbacks.progress import ProgressBarBase, RichProgressBar, TQDMProgressBar
webui-docker-invoke-1 | File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/progress/__init__.py", line 22, in <module>
webui-docker-invoke-1 | from pytorch_lightning.callbacks.progress.rich_progress import RichProgressBar # noqa: F401
webui-docker-invoke-1 | File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/progress/rich_progress.py", line 20, in <module>
webui-docker-invoke-1 | from torchmetrics.utilities.imports import _compare_version
webui-docker-invoke-1 | ImportError: cannot import name '_compare_version' from 'torchmetrics.utilities.imports' (/opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/imports.py)
webui-docker-invoke-1 exited with code 1
:~/stable-diffusion-webui-docker$
#596
Hmm, if I understand this correctly, this error should not be happening anymore, right?
Well, I just ran into it with a fresh install today.
Same here with invoke
Ditto here on a local install following the instructions; the exact same procedure described above reproduces it.
Anyone still getting this error: you can fix it by modifying the Dockerfile at stable-diffusion-webui-docker/services/invoke/Dockerfile, adding a RUN pip install torchmetrics==0.11.4 line after WORKDIR ${ROOT}:

WORKDIR ${ROOT}

RUN pip install torchmetrics==0.11.4

RUN --mount=type=cache,target=/root/.cache/pip \
    git reset --hard f3b2e02921927d9317255b1c3811f47bd40a2bf9 && \
    pip install -e .
Brilliant - thanks!
But now I'm facing another error: cannot import name 'ModelSearchArguments' from 'huggingface_hub'
same here, did you solve it?
Hi, I wonder whether the already-downloaded files will be removed automatically when I switch from "invoke" to "auto"? Keeping them is kind of a waste of space, since I'll never use "invoke" again. Or how should I remove the files related to the "invoke" profile? Thanks.
Describe the bug
[+] Running 1/1
✔ Container webui-docker-invoke-1 Created 0.1s
Attaching to invoke-1
invoke-1 | mkdir: created directory '/data/.cache/invoke'
invoke-1 | mkdir: created directory '/data/.cache/invoke/ldm/'
invoke-1 | Mounted ldm
invoke-1 | Mounted .cache
invoke-1 | Mounted RealESRGAN
invoke-1 | mkdir: created directory '/data/models/Codeformer/'
invoke-1 | Mounted Codeformer
invoke-1 | Mounted GFPGAN
invoke-1 | Mounted GFPGANv1.4.pth
invoke-1 | Loading Python libraries...
invoke-1 |
invoke-1 | Traceback (most recent call last):
invoke-1 | File "/opt/conda/bin/invokeai-configure", line 5, in <module>
invoke-1 | from ldm.invoke.config.invokeai_configure import main
invoke-1 | File "/InvokeAI/ldm/invoke/config/invokeai_configure.py", line 40, in <module>
invoke-1 | from ..args import PRECISION_CHOICES, Args
invoke-1 | File "/InvokeAI/ldm/invoke/args.py", line 100, in <module>
invoke-1 | from ldm.invoke.conditioning import split_weighted_subprompts
invoke-1 | File "/InvokeAI/ldm/invoke/conditioning.py", line 18, in <module>
invoke-1 | from .generator.diffusers_pipeline import StableDiffusionGeneratorPipeline
invoke-1 | File "/InvokeAI/ldm/invoke/generator/__init__.py", line 4, in <module>
invoke-1 | from .base import Generator
invoke-1 | File "/InvokeAI/ldm/invoke/generator/base.py", line 21, in <module>
invoke-1 | from pytorch_lightning import seed_everything
invoke-1 | File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/__init__.py", line 34, in <module>
invoke-1 | from pytorch_lightning.callbacks import Callback # noqa: E402
invoke-1 | File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/__init__.py", line 25, in <module>
invoke-1 | from pytorch_lightning.callbacks.progress import ProgressBarBase, RichProgressBar, TQDMProgressBar
invoke-1 | File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/progress/__init__.py", line 22, in <module>
invoke-1 | from pytorch_lightning.callbacks.progress.rich_progress import RichProgressBar # noqa: F401
invoke-1 | File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/progress/rich_progress.py", line 20, in <module>
invoke-1 | from torchmetrics.utilities.imports import _compare_version
invoke-1 | ImportError: cannot import name '_compare_version' from 'torchmetrics.utilities.imports' (/opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/imports.py)
invoke-1 exited with code 1
Which UI
invoke
Hardware / Software
- OS: Windows 11 Pro
- OS version: 23H2 (OS Build 22631.3296)
- WSL version: docker-desktop Running (WSL 2), docker-desktop-data Running (WSL 2)
- Docker Version: 25.0.3 (API 1.44, Go 1.21.6, commit f417435), Docker Desktop 4.28.0 (139021), containerd 1.6.28, runc 1.1.12, docker-init 0.19.0
- Docker compose version: v2.24.6-desktop.1
- Repo version: 1.2.0
- RAM: 64GB
- GPU/VRAM: RTX 4090/24GB
Steps to Reproduce
- Run the setup commands:
docker compose --profile download up --build
docker compose --profile invoke up --build
- Error:
invoke-1 | ImportError: cannot import name '_compare_version' from 'torchmetrics.utilities.imports' (/opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/imports.py)
invoke-1 exited with code 1
- Container runs for 47 seconds and stops
Additional context
New install with:
git clone https://github.com/AbdBarho/stable-diffusion-webui-docker.git
But now I'm facing another error: cannot import name 'ModelSearchArguments' from 'huggingface_hub'
huggingface_hub removed 'ModelSearchArguments' in v0.19; you can remove the current version and re-install v0.18.0 or below. Note that you also need to lower the version of transformers to v4.35, otherwise you will get an "AttributeError: module 'huggingface_hub.constants' has no attribute 'HF_HUB_CACHE'" error instead.
I solved this issue by adding :
RUN --mount=type=cache,target=/root/.cache/pip \
pip uninstall -y torchmetrics && \
pip install torchmetrics==0.11.4 && \
pip uninstall -y huggingface-hub && \
pip install huggingface-hub==0.18.0 && \
pip uninstall -y transformers && \
pip install transformers==4.35.2
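The version threshold mentioned above can be captured in a small stdlib-only check (a sketch; the v0.19 cutoff is taken from the comment, not verified against every huggingface_hub release):

```python
def removed_model_search_arguments(hub_version: str) -> bool:
    """True if this huggingface_hub version is assumed to have dropped
    ModelSearchArguments (v0.19 and later, per the comment above)."""
    major, minor = (int(x) for x in hub_version.split(".")[:2])
    return (major, minor) >= (0, 19)
```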
Welp @mynameiskeen, your changes seem to almost solve the problem in my case 😅 now I get this:
invoke-1 | * --web was specified, starting web server...
invoke-1 | Traceback (most recent call last):
invoke-1 | File "/opt/conda/bin/invokeai", line 8, in <module>
invoke-1 | sys.exit(main())
invoke-1 | File "/InvokeAI/ldm/invoke/CLI.py", line 184, in main
invoke-1 | invoke_ai_web_server_loop(gen, gfpgan, codeformer, esrgan)
invoke-1 | File "/InvokeAI/ldm/invoke/CLI.py", line 1078, in invoke_ai_web_server_loop
invoke-1 | from invokeai.backend import InvokeAIWebServer
invoke-1 | File "/InvokeAI/invokeai/backend/__init__.py", line 4, in <module>
invoke-1 | from .invoke_ai_web_server import InvokeAIWebServer
invoke-1 | File "/InvokeAI/invokeai/backend/invoke_ai_web_server.py", line 17, in <module>
invoke-1 | from flask import Flask, redirect, send_from_directory, request, make_response
invoke-1 | File "/opt/conda/lib/python3.10/site-packages/flask/__init__.py", line 7, in <module>
invoke-1 | from .app import Flask as Flask
invoke-1 | File "/opt/conda/lib/python3.10/site-packages/flask/app.py", line 27, in <module>
invoke-1 | from . import cli
invoke-1 | File "/opt/conda/lib/python3.10/site-packages/flask/cli.py", line 17, in <module>
invoke-1 | from .helpers import get_debug_flag
invoke-1 | File "/opt/conda/lib/python3.10/site-packages/flask/helpers.py", line 14, in <module>
invoke-1 | from werkzeug.urls import url_quote
invoke-1 | ImportError: cannot import name 'url_quote' from 'werkzeug.urls' (/opt/conda/lib/python3.10/site-packages/werkzeug/urls.py)
invoke-1 | Exception ignored in atexit callback: <built-in function write_history_file>
invoke-1 | FileNotFoundError: [Errno 2] No such file or directory
invoke-1 exited with code 1
Try adding this:

pip uninstall -y Werkzeug && \
pip install Werkzeug==2.2.2
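Pulling the whole thread together, the pins people reported success with can go in one layer (a sketch, not a verified fix for every InvokeAI commit; exact-version installs make the separate uninstall steps unnecessary, since pip replaces the installed package when the pinned version differs):

```dockerfile
# Pins collected from this thread: torchmetrics for the
# _compare_version import, huggingface-hub/transformers for
# ModelSearchArguments, Werkzeug for url_quote
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install \
        torchmetrics==0.11.4 \
        huggingface-hub==0.18.0 \
        transformers==4.35.2 \
        Werkzeug==2.2.2
```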