
[Bug]: Reinstalling Torch and Xformers broke everything

Open RaymondTracer opened this issue 2 years ago • 50 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

I launched with --reinstall-xformers and --reinstall-torch, and now it won't generate images.

It gives me this error message while launching: Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled

and this one when I try to generate images: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:997)

Steps to reproduce the problem

  1. Open webui-user.bat in notepad
  2. Add "--reinstall-xformers --reinstall-torch" to command line args.
  3. Everything breaks

What should have happened?

Torch and xformers should have updated, and nothing should have broken.

Commit where the problem happens

7ff1ef77dd22f7b38612f91b389237a5dbef2474

What platforms do you use to access the UI?

Windows

What browsers do you use to access the UI?

Mozilla Firefox

Command Line Arguments

--xformers --autolaunch --api --no-half-vae --no-half --gradio-queue --medvram --precision full --reinstall-torch --reinstall-xformers

Additional information, context and logs

Full log of latest launch, with minimal command line args:

C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111>webui-user.bat
venv "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\Scripts\Python.exe"
Python 3.10.7 (tags/v3.10.7:6cc6b13, Sep  5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)]
Commit hash: 171a5b3bb9eb06ebbd4a16c293fda5ce2a7fa462
Installing requirements for Web UI
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled


Error running install.py for extension extensions\sd-webui-riffusion.
Command: "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\Scripts\python.exe" "extensions\sd-webui-riffusion\install.py"
Error code: 1
stdout: Initializing Riffusion
[Riffusion] Installing torchaudio...

stderr: Traceback (most recent call last):
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\extensions\sd-webui-riffusion\install.py", line 29, in <module>
    run(
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\launch.py", line 65, in run
    raise RuntimeError(message)
RuntimeError: [Riffusion] Couldn't install torchaudio..
Command: "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\Scripts\python.exe" -m pip install torchaudio==0.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
Error code: 1
stdout: Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com, https://download.pytorch.org/whl/cu113
Requirement already satisfied: torchaudio==0.12.1+cu113 in c:\users\user\documents\github\stable-diffusion-webui_automatic1111\venv\lib\site-packages (0.12.1+cu113)
Collecting torch==1.12.1
  Downloading https://download.pytorch.org/whl/cu113/torch-1.12.1%2Bcu113-cp310-cp310-win_amd64.whl (2143.8 MB)
     ---------------------------------------- 2.1/2.1 GB 7.7 MB/s eta 0:00:00
Requirement already satisfied: typing-extensions in c:\users\user\documents\github\stable-diffusion-webui_automatic1111\venv\lib\site-packages (from torch==1.12.1->torchaudio==0.12.1+cu113) (4.3.0)
Installing collected packages: torch
  Attempting uninstall: torch
    Found existing installation: torch 1.13.1
    Uninstalling torch-1.13.1:
      Successfully uninstalled torch-1.13.1

stderr: ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'C:\\Users\\user\\Documents\\GitHub\\stable-diffusion-webui_AUTOMATIC1111\\venv\\Lib\\site-packages\\~%rch\\lib\\asmjit.dll'
Check the permissions.


Launching Web UI with arguments: --autolaunch --api
C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: [WinError 127] The specified procedure could not be found
  warn(f"Failed to load image Python extension: {e}")
No module 'xformers'. Proceeding without it.
==============================================================================
You are running torch 1.12.1+cu113.
The program is tested to work with torch 1.13.1.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded.
==============================================================================
Default key/cert pair was already generated by webui
Certificate trust store ready
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading weights [67a115286b] from C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\models\Stable-diffusion\Anything V3.0.ckpt
Loading VAE weights found near the checkpoint: C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\models\Stable-diffusion\Anything V3.0.vae.pt
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(0):
Model loaded in 45.7s (0.5s create model, 43.1s load weights).
Running with TLS
add tab
Running on local URL:  https://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\anyio\streams\memory.py", line 94, in receive
    return self.receive_nowait()
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\anyio\streams\memory.py", line 89, in receive_nowait
    raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\starlette\middleware\base.py", line 77, in call_next
    message = await recv_stream.receive()
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 404, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\fastapi\applications.py", line 270, in __call__
    await super().__call__(scope, receive, send)
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\starlette\applications.py", line 124, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\starlette\middleware\base.py", line 106, in __call__
    response = await self.dispatch_func(request, call_next)
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\modules\api\api.py", line 91, in log_and_time
    res: Response = await call_next(req)
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
    raise app_exc
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\starlette\middleware\gzip.py", line 43, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\starlette\routing.py", line 706, in __call__
    await route.handle(scope, receive, send)
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\fastapi\routing.py", line 235, in app
    raw_response = await run_endpoint_function(
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\fastapi\routing.py", line 163, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\modules\api\api.py", line 255, in extras_single_image_api
    reqDict['image'] = decode_base64_to_image(reqDict['image'])
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\modules\api\api.py", line 56, in decode_base64_to_image
    return Image.open(BytesIO(base64.b64decode(encoding)))
  File "C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv\lib\site-packages\PIL\Image.py", line 3283, in open
    raise UnidentifiedImageError(msg)
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x000001E6C2B36E80>
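Side note on the "[WinError 5] Access is denied" near the top of the log: pip renames a package folder to a "~..." name while uninstalling it, so when the torch uninstall got interrupted it likely left a half-removed folder behind in site-packages. A minimal cleanup sketch (the venv path is just the one from my log; review what it prints before uncommenting the delete):

# Hedged cleanup sketch: find the "~..." folders pip leaves in site-packages when an
# uninstall is interrupted; they are safe to remove once nothing is holding them open.
import shutil
from pathlib import Path

# Path taken from the log above -- adjust to your own install.
venv = Path(r"C:\Users\user\Documents\GitHub\stable-diffusion-webui_AUTOMATIC1111\venv")
site_packages = venv / "Lib" / "site-packages"

for leftover in site_packages.glob("~*"):
    print("leftover uninstall folder:", leftover)
    # shutil.rmtree(leftover)  # uncomment to actually remove it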

RaymondTracer avatar Jan 23 '23 16:01 RaymondTracer

same here.

tommcg avatar Jan 23 '23 16:01 tommcg

I can generate images again after renaming the "venv" folder and letting it generate a new one.

Still get these errors though:

No module 'xformers'. Proceeding without it.
==============================================================================
You are running torch 1.12.1+cu113.
The program is tested to work with torch 1.13.1.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded.
==============================================================================

New errors upon second launch:

WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 1.13.1+cu117 with CUDA 1107 (you have 1.12.1+cu113)
    Python  3.10.9 (you have 3.10.7)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
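In case it helps anyone else debugging this: the warning above just means the venv ended up with a torch build (1.12.1+cu113 here) that doesn't match the one this xformers wheel was compiled against (1.13.1+cu117). A quick, hedged way to see exactly what is installed inside the venv is to run something like this with venv\Scripts\python.exe (the file name check_env.py is arbitrary):

# check_env.py -- print the interpreter and package versions the webui venv is actually using
import sys

import torch

print("python  :", sys.version.split()[0])
print("torch   :", torch.__version__)          # webui currently expects 1.13.1 (+cu117 on Windows)
print("cuda ok :", torch.cuda.is_available())  # False means a CPU-only or broken torch build

try:
    import xformers
    print("xformers:", xformers.__version__)   # the new wheels are built against torch 1.13.1+cu117
except ImportError as err:
    print("xformers: not importable:", err)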

RaymondTracer avatar Jan 23 '23 16:01 RaymondTracer

I concur - deleting venv doesn't help and these errors do pop up. ~~Additionally, while recreating venv by running webui-user.bat, the script deleted both bat files, namely webui-user.bat and webui.bat, which is weird.~~ I also had to update system Python from 3.10.6 to 3.10.9.

mart-hill avatar Jan 23 '23 16:01 mart-hill

weeeeird... but maybe that's weird for me because I've been modifying the launch.py script to load the cu117 pytorch since forever. And I have no clue why your xformers want 3.10.9 when my xformers run fine on 3.10.6

I mean, if you deleted venv, then it would reinstall Python, which is 3.10.6 by default, so you must've modified the Python version...

Edit: maybe try this: reinstall global Python to 3.10.6, delete the repositories and venv folders, open Git Bash, and run "git checkout launch.py" and "git checkout webui.py" if either has been modified in any way.

DarkSolus avatar Jan 23 '23 17:01 DarkSolus

xformers works for me on 3.10.7; I did not modify anything previously.

mezotaken avatar Jan 23 '23 17:01 mezotaken

Here to report an error while installing torch and torchvision. I decided to reinstall the webui fresh, and I got an error in the initial setup with webui-user.bat. I'm waiting to try it again in a proper terminal so I can copy the error, but it had something to do with a wheel and a pip update.

antis0007 avatar Jan 23 '23 17:01 antis0007

I had Python 3.10.6 in my main env, and WebUI pulls it from there, right? Then, after using git pull, I used the two newly introduced flags --reinstall-torch --reinstall-xformers along with --update-check in webui-user.bat, and after running that I got a notification that xformers (I think) weren't compiled for this Python version.

WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 1.13.1+cu117 with CUDA 1107 (you have 1.13.1+cpu)
    Python  3.10.9 (you have 3.10.9)   **<-- Here, I had "(you have 3.10.6)".**
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.

I'm on Windows though, with the Linux "part" now installed.

A second attempt to run webui-user.bat gives me an error that CUDA doesn't exist (no wonder), and the script abruptly exits.

Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Commit hash: c6f20f72629f3c417f10db2289d131441c6832f5
Traceback (most recent call last):
  File "O:\AI\stable-diffusion-webui\launch.py", line 315, in <module>
    prepare_environment()
  File "O:\AI\stable-diffusion-webui\launch.py", line 227, in prepare_environment
    run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
  File "O:\AI\stable-diffusion-webui\launch.py", line 89, in run_python
    return run(f'"{python}" -c "{code}"', desc, errdesc)
  File "O:\AI\stable-diffusion-webui\launch.py", line 65, in run
    raise RuntimeError(message)
RuntimeError: Error running command.
Command: "O:\AI\stable-diffusion-webui\venv\Scripts\python.exe" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
Error code: 1
stdout: <empty>
stderr: Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
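For what it's worth, that assertion fires because the torch wheel in the venv is a CPU-only build (the earlier xformers warning shows "1.13.1+cpu"). A tiny check like this, run with the venv's python.exe, makes it obvious which kind of wheel you have; this is only a sketch, not an official diagnostic:

import torch

print(torch.__version__)          # "+cpu" suffix = CPU-only wheel, "+cu117" = CUDA 11.7 wheel
print(torch.version.cuda)         # None on a CPU-only build, e.g. "11.7" on a CUDA build
print(torch.cuda.is_available())  # this is the exact check launch.py asserts on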

Edit: Aaaha. Comodo Internet Security took the bat files as a risk, and the anti-ransomware part, VirusScope, wiped the batch files out. Trying again. :) Though the part where I used the freshly introduced "reinstall" feature still stands; the error was the same for me as it was for the OP.

Also, I think one of the extensions is partially wiping the torch installation, since venv went from around 7.6 GB to 3.6 GB in an instant. Edit: It might be the Smartcrop one.

My webui-user.bat file looks like this. I redirect all the TEMP environment variables to a selected folder, to have control over the mess that accumulates there after a while. :) It's the only file I modify.

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set TMP=X:\AI\TEMP
set TEMP=X:\AI\TEMP
set SAFETENSORS_FAST_GPU=1
set COMMANDLINE_ARGS=--reinstall-torch --reinstall-xformers --no-half-vae --api --deepdanbooru --xformers

call webui.bat

mart-hill avatar Jan 23 '23 17:01 mart-hill

Hi - I also had to reinstall xformers and torch upon request, and now I have almost the same errors on Windows as the topic starter. Here they are:

Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 94, in receive
    return self.receive_nowait()
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 89, in receive_nowait
    raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 77, in call_next
    message = await recv_stream.receive()
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 270, in __call__
    await super().__call__(scope, receive, send)
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 124, in __call__
    await self.middleware_stack(scope, receive, send)
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 106, in __call__
    response = await self.dispatch_func(request, call_next)
  File "D:\AUTOMATIC1111\stable-diffusion-webui\modules\api\api.py", line 91, in log_and_time
    res: Response = await call_next(req)
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
    raise app_exc
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 43, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 706, in __call__
    await route.handle(scope, receive, send)
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 235, in app
    raw_response = await run_endpoint_function(
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 163, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "D:\AUTOMATIC1111\stable-diffusion-webui\modules\api\api.py", line 255, in extras_single_image_api
    reqDict['image'] = decode_base64_to_image(reqDict['image'])
  File "D:\AUTOMATIC1111\stable-diffusion-webui\modules\api\api.py", line 56, in decode_base64_to_image
    return Image.open(BytesIO(base64.b64decode(encoding)))
  File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\PIL\Image.py", line 3283, in open
    raise UnidentifiedImageError(msg)
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x000001D16A84CC20>

lavalava45 avatar Jan 23 '23 18:01 lavalava45

Hi - I also had to reinstall xformers and torch upon request, and now I have almost the same errors on Windows as the topic starter.

I had the same problem; disabling the openOutpaint-webUI extension fixed it.

CrazyKrow avatar Jan 23 '23 18:01 CrazyKrow

I had to delete the Smartprocess extension as well, since it was messing up torch (wiping half of it), but now WebUI struggles with two other extensions during the "first" run after recreating venv:

Installing pywin32
ERROR:root:Aesthetic Image Scorer: Unable to load Windows tagging script from tools directory
Traceback (most recent call last):
  File "O:\AI\stable-diffusion-webui\extensions\stable-diffusion-webui-aesthetic-image-scorer\scripts\image_scorer.py", line 26, in <module>
    from tools.add_tags import tag_files
  File "O:\AI\stable-diffusion-webui\extensions\stable-diffusion-webui-aesthetic-image-scorer\tools\add_tags.py", line 6, in <module>
    import pythoncom
  File "O:\AI\stable-diffusion-webui\venv\lib\site-packages\pythoncom.py", line 2, in <module>
    import pywintypes
ModuleNotFoundError: No module named 'pywintypes'
Error loading script: training_picker.py
Traceback (most recent call last):
  File "O:\AI\stable-diffusion-webui\modules\scripts.py", line 218, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "O:\AI\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
    exec(compiled, module.__dict__)
  File "O:\AI\stable-diffusion-webui\extensions\training-picker\scripts\training_picker.py", line 16, in <module>
    from modules.ui import create_refresh_button, folder_symbol
ImportError: cannot import name 'folder_symbol' from 'modules.ui' (O:\AI\stable-diffusion-webui\modules\ui.py)

Should I just restart it and see if it fixes itself? Pywin32 seems to be in venv's site-packages, along with the win32com package.

Edit: Yeah, after a restart, it's just the Training Picker extension, oh well. :)

Error loading script: training_picker.py
Traceback (most recent call last):
  File "O:\AI\stable-diffusion-webui\modules\scripts.py", line 218, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "O:\AI\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
    exec(compiled, module.__dict__)
  File "O:\AI\stable-diffusion-webui\extensions\training-picker\scripts\training_picker.py", line 16, in <module>
    from modules.ui import create_refresh_button, folder_symbol
ImportError: cannot import name 'folder_symbol' from 'modules.ui' (O:\AI\stable-diffusion-webui\modules\ui.py)

That said, I wonder if any training will be possible now with such changes in WebUI. 😅

mart-hill avatar Jan 23 '23 18:01 mart-hill

Had to go through the old steps for a 4090 (get rid of openOutpaint); now everything works.

bbecausereasonss avatar Jan 23 '23 20:01 bbecausereasonss

Launching Web UI with arguments: --xformers
[WinError 127] The specified procedure could not be found
WARNING:root:WARNING: [WinError 127] The specified procedure could not be found
Need to compile C++ extensions to get sparse attention suport. Please run python setup.py build develop

You are running xformers 0.0.14.dev. The program is tested to work with xformers 0.0.16rc425. To reinstall the desired version, run with commandline flag --reinstall-xformers.

Can't reinstall xformers; it just ignores the command-line flag. My setup was working before updating to the latest via git.

ProfessorMorbius avatar Jan 23 '23 22:01 ProfessorMorbius

Add the Hypernet-monkey patch extension to the group too. After removing that and everything else listed in this thread, I at least get the webui to start. I still get the reinstall error around xformers though, so there must be something else involved as well. Running with --reinstall-xformers does nothing. I haven't upgraded Python to 3.10.9 yet either; I'm gonna wait for some real info before screwing around much more. Hopefully this is resolved shortly. This is what I get for git pulling for fancy colored text.

Launching Web UI with arguments: --xformers --no-half-vae --precision full --gradio-queue --listen --api --deepdanbooru --disable-safe-unpickle --no-half
C:\webui\venv\lib\site-packages\torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: [WinError 127] The specified procedure could not be found
  warn(f"Failed to load image Python extension: {e}")
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 1.13.1+cu117 with CUDA 1107 (you have 1.12.1+cu113)
    Python  3.10.9 (you have 3.10.6)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
==============================================================================
You are running torch 1.12.1+cu113.
The program is tested to work with torch 1.13.1.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded.
==============================================================================
C:\webui\venv\Scripts\python.exe
SD-Webui API layer loaded
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.

dicksensei69 avatar Jan 23 '23 23:01 dicksensei69

Launching Web UI with arguments: --xformers

[WinError 127] The specified procedure could not be found
WARNING:root:WARNING: [WinError 127] The specified procedure could not be found
Need to compile C++ extensions to get sparse attention suport. Please run python setup.py build develop

You are running xformers 0.0.14.dev. The program is tested to work with xformers 0.0.16rc425. To reinstall the desired version, run with commandline flag --reinstall-xformers.

Can't reinstall xformers; it just ignores the command-line flag. My setup was working before updating to the latest via git.

Back in November, I think, I had a problem with a "wheel" not being compiled, so I was forced to install the VC++ compiler from Microsoft to actually compile ONE freakin' file; of course, it worked - seriously. :)

mart-hill avatar Jan 24 '23 00:01 mart-hill

I've had some sleep; it got to 4 AM while I was figuring this out. I updated Python to 3.10.9. openOutpaint seems to have had some updates, since I'm not getting those huge stack traces anymore. Riffusion was forcing Torch to an older version, which was causing xformers to not work, so I disabled it. ddetailer was also causing an error, so I disabled that too.

Seems everything is fine now and SD runs like a charm.

RaymondTracer avatar Jan 24 '23 01:01 RaymondTracer

I tried the reinstall for both as well; the torch reinstall worked fine, but the xformers one failed, and Windows started throwing an unreadable error when the app reached the xformers part of its load. I deleted the entire venv folder, and after the app rebuilt it, everything began working fine and it is generating images without any errors.

Cerevox avatar Jan 24 '23 02:01 Cerevox

I had the same problem. Torchaudio in Riffusion gave me an error. I temporarily removed Riffusion and it worked.

SignalFlagZ avatar Jan 24 '23 02:01 SignalFlagZ

same here, tried reinstalling both. broken :/

bl4ckfyr3 avatar Jan 24 '23 06:01 bl4ckfyr3

[Screenshot: error after --reinstall-xformers]

screan avatar Jan 24 '23 09:01 screan

For me, the fix was:

  1. Remove --xformers and run without it
  2. Disable the riffusion and openOutpaint extensions (sorry, I didn't check whether only one of them was the problem)
  3. Run once with --reinstall-torch (then remove it)
  4. Run once with --reinstall-xformers (then remove it)
  5. Run again with --xformers added back in

This seems to have fixed things for me.
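To double-check that the reinstalled xformers actually works with the new torch (rather than merely importing), a small smoke test like the one below can help; it's only a sketch and assumes a CUDA build of torch plus xformers 0.0.16 or so:

# Smoke test: does xformers' memory-efficient attention run on this torch/CUDA combo?
import torch
import xformers.ops as xops

assert torch.cuda.is_available(), "needs a CUDA build of torch"

# batch=1, seq_len=16, heads=8, head_dim=64 -- tiny fp16 tensors on the GPU
q = torch.randn(1, 16, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

out = xops.memory_efficient_attention(q, k, v)  # raises if the compiled kernels don't match torch
print("xformers attention OK:", tuple(out.shape))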

Norgus avatar Jan 24 '23 10:01 Norgus

Riffusion is the issue; it's forcing an old version of Torch. Update openOutpaint and it'll work without issues. I'm closing this; the issue is solved now.

RaymondTracer avatar Jan 24 '23 10:01 RaymondTracer

Riffusion is the issue; it's forcing an old version of Torch. Update openOutpaint and it'll work without issues. I'm closing this; the issue is solved now.

@RaymondTracer I do not have either of those extensions but am facing the same issue. Suggested fix?

screan avatar Jan 24 '23 17:01 screan

@RaymondTracer did you try training an embedding? That still doesn't work for me after updating torch and xformers, the embedding has no effect after training. Was working fine before the torch & xformers update yesterday. No extensions installed.

IanD-FM avatar Jan 24 '23 18:01 IanD-FM

did you try training an embedding? That still doesn't work for me after updating torch and xformers, the embedding has no effect after training. Was working fine before the torch & xformers update yesterday. No extensions installed.

You need to roll back; I did. TI training works now.

bl4ckfyr3 avatar Jan 24 '23 20:01 bl4ckfyr3

You need to roll back; I did. TI training works now.

Yes, I ended up doing that, but that doesn't mean the issue is fixed as @RaymondTracer said; it's still a bug.

IanD-FM avatar Jan 24 '23 21:01 IanD-FM

After reinstalling torch I get many errors when creating a new model in Dreambooth. Does anyone else have this?

caretaker0815 avatar Jan 24 '23 21:01 caretaker0815

Riffusion is the issue; it's forcing an old version of Torch. Update openOutpaint and it'll work without issues. I'm closing this; the issue is solved now.

@RaymondTracer I do not have either of those extensions but am facing the same issue. Suggested fix?

Have you updated your copy of Python?

@RaymondTracer did you try training an embedding? That still doesn't work for me after updating torch and xformers, the embedding has no effect after training. Was working fine before the torch & xformers update yesterday. No extensions installed.

No, I haven't, I don't use them. That might need to be its own issue.


I'm going to reopen this issue; it seems people are still having trouble.

RaymondTracer avatar Jan 24 '23 22:01 RaymondTracer

Updating torch and xformers also broke ddetailer extension: https://github.com/dustysys/ddetailer/issues/20

illtellyoulater avatar Jan 25 '23 03:01 illtellyoulater

Hello! I fixed the error this way:

  1. Remove --xformers and run without it
  2. Run once with --reinstall-torch (then remove it)
  3. Run once with --reinstall-xformers (then remove it)
  4. Run again with --xformers added back in
  5. Rename the "venv" folder, for example to "venvOLD"; a new clean "venv" folder will be created, and all the needed components like PyTorch and xformers will be installed inside it
  6. Run. If you don't get any errors, you can erase the "venvOLD" folder

neomio avatar Jan 25 '23 13:01 neomio

@neomio That worked very well. Thank you very much for your help.

caretaker0815 avatar Jan 25 '23 14:01 caretaker0815