/content/stable-diffusion-webui
Already up to date.
The following values were not passed to accelerate launch and had defaults used instead:
    --num_processes was set to a value of 1
    --num_machines was set to a value of 1
    --mixed_precision was set to a value of 'no'
    --dynamo_backend was set to a value of 'no'
To avoid this warning pass in values for each of the problematic parameters or run accelerate config.
Python 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0]
Commit hash: 72cd27a13587c9579942577e9e3880778be195f6
Installing requirements
Launching Web UI with arguments: --xformers --no-half-vae --share --gradio-queue --styles-file /content/data/config/styles.csv
Traceback (most recent call last):
  File "/content/stable-diffusion-webui/launch.py", line 353, in <module>
    start()
  File "/content/stable-diffusion-webui/launch.py", line 344, in start
    import webui
  File "/content/stable-diffusion-webui/webui.py", line 22, in <module>
    import pytorch_lightning # pytorch_lightning should be imported after torch, but it re-enables warnings on import so import once to disable them
  File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/__init__.py", line 35, in <module>
    from pytorch_lightning.callbacks import Callback # noqa: E402
  File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/__init__.py", line 14, in <module>
    from pytorch_lightning.callbacks.batch_size_finder import BatchSizeFinder
  File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/batch_size_finder.py", line 24, in <module>
    from pytorch_lightning.callbacks.callback import Callback
  File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/callbacks/callback.py", line 25, in <module>
    from pytorch_lightning.utilities.types import STEP_OUTPUT
  File "/opt/conda/lib/python3.10/site-packages/pytorch_lightning/utilities/types.py", line 27, in <module>
    from torchmetrics import Metric
  File "/opt/conda/lib/python3.10/site-packages/torchmetrics/__init__.py", line 14, in <module>
    from torchmetrics import functional # noqa: E402
  File "/opt/conda/lib/python3.10/site-packages/torchmetrics/functional/__init__.py", line 14, in <module>
    from torchmetrics.functional.audio.pit import permutation_invariant_training, pit_permutate
  File "/opt/conda/lib/python3.10/site-packages/torchmetrics/functional/audio/__init__.py", line 14, in <module>
    from torchmetrics.functional.audio.pit import permutation_invariant_training, pit_permutate # noqa: F401
  File "/opt/conda/lib/python3.10/site-packages/torchmetrics/functional/audio/pit.py", line 22, in <module>
    from torchmetrics.utilities.imports import _SCIPY_AVAILABLE
  File "/opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/__init__.py", line 1, in <module>
    from torchmetrics.utilities.checks import check_forward_full_state_property # noqa: F401
  File "/opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/checks.py", line 25, in <module>
    from torchmetrics.utilities.data import select_topk, to_onehot
  File "/opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/data.py", line 19, in <module>
    from torchmetrics.utilities.imports import _TORCH_GREATER_EQUAL_1_12, _XLA_AVAILABLE
  File "/opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/imports.py", line 112, in <module>
    _TORCHVISION_GREATER_EQUAL_0_8: Optional[bool] = _compare_version("torchvision", operator.ge, "0.8.0")
  File "/opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/imports.py", line 78, in _compare_version
    if not _module_available(package):
  File "/opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/imports.py", line 59, in _module_available
    module = import_module(module_names[0])
  File "/opt/conda/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/opt/conda/lib/python3.10/site-packages/torchvision/__init__.py", line 6, in <module>
    from torchvision import datasets, io, models, ops, transforms, utils
  File "/opt/conda/lib/python3.10/site-packages/torchvision/datasets/__init__.py", line 1, in <module>
    from ._optical_flow import FlyingChairs, FlyingThings3D, HD1K, KittiFlow, Sintel
  File "/opt/conda/lib/python3.10/site-packages/torchvision/datasets/_optical_flow.py", line 12, in <module>
    from ..io.image import _read_png_16
  File "/opt/conda/lib/python3.10/site-packages/torchvision/io/__init__.py", line 8, in <module>
    from ._load_gpu_decoder import _HAS_GPU_VIDEO_DECODER
  File "/opt/conda/lib/python3.10/site-packages/torchvision/io/_load_gpu_decoder.py", line 1, in <module>
    from ..extension import _load_library
  File "/opt/conda/lib/python3.10/site-packages/torchvision/extension.py", line 107, in <module>
    _check_cuda_version()
  File "/opt/conda/lib/python3.10/site-packages/torchvision/extension.py", line 80, in _check_cuda_version
    raise RuntimeError(
RuntimeError: Detected that PyTorch and torchvision were compiled with different CUDA versions. PyTorch has CUDA Version=11.7 and torchvision has CUDA Version=11.8. Please reinstall the torchvision that matches your PyTorch install.
Traceback (most recent call last):
  File "/opt/conda/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
    args.func(args)
  File "/opt/conda/lib/python3.10/site-packages/accelerate/commands/launch.py", line 923, in launch_command
    simple_launcher(args)
  File "/opt/conda/lib/python3.10/site-packages/accelerate/commands/launch.py", line 579, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/bin/python', 'launch.py']' returned non-zero exit status 1.
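The key line is the RuntimeError above: the installed torch is a CUDA 11.7 build while torchvision is a CUDA 11.8 build. If you want to confirm which builds you have before reinstalling, a minimal check like this works in a Colab cell (reading torchvision's version from package metadata, since importing torchvision would just raise the same error again):

import torch
from importlib.metadata import version  # avoids importing torchvision itself

print("torch:", torch.__version__, "| CUDA build:", torch.version.cuda)  # e.g. 1.13.1 / 11.7
print("torchvision:", version("torchvision"))  # a +cu118-style suffix, if present, shows its CUDA build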
try torch-1.13.1+cu117 torchvision-0.14.1+cu117
Can you please provide a detailed explanation of what I should specifically do?
You got this error because Automatic1111 updated their code (including the AMD GPU changes) to work with torch 2.0.0, while this GitHub Colab still uses an older torch and torchvision.
Try
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117
or, if it still doesn't work, try my cloned repo instead of Automatic1111:
https://github.com/hamnv/stable-diffusion-webui.git
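If you are running the notebook, one way to apply that pip command (a sketch, assuming the stock notebook layout) is a cell executed before the cell that launches the webui; the +cu117 wheels are hosted on the PyTorch package index rather than PyPI, so the extra index URL is needed:

# Run in a Colab cell *before* the cell that launches the webui.
!pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
# Restart the runtime afterwards if torch was already imported in this session.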
Try pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117
Where would someone put this in the Colab?
Where would I put that code??
It works with https://github.com/hamnv/stable-diffusion-webui.git
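If you go the fork route instead, the change in the notebook would typically just be the clone URL (a sketch; the target path below is taken from the log at the top, and the rest of the cell is an assumption about your notebook):

# Hypothetical Colab cell: clone the fork instead of upstream Automatic1111.
!git clone https://github.com/hamnv/stable-diffusion-webui.git /content/stable-diffusion-webui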
Hi @tenrandomdigits, have you solved this problem?