stable-diffusion-webui
Stuck on "Applying cross attention optimization (Doggettx)" with ROCm on RX6600M [Bug]:
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
Running ROCm on an RX6600M. I followed serverneko's guide and modified webui.sh to install torch and torchvision built for ROCm 5.4.2 instead of 5.4. The launcher prints "No module 'xformers'. Proceeding without it." and then gets stuck at "Applying cross attention optimization (Doggettx)". Is xformers the cause? How do I proceed, whether it is or not? I have already run `pip install xformers`.
Steps to reproduce the problem
- Edit webui.sh
- Comment out the torch line in requirements.txt and requirements_versions.txt
- Run webui.sh
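For reference, a minimal sketch of the modification described in the steps above, assuming the guide overrides the stock `TORCH_COMMAND` variable that webui.sh/launch.py consult for the torch install (the exact contents of serverneko's guide are not reproduced here):

```shell
#!/usr/bin/env bash
# Hedged sketch: point the install at torch/torchvision wheels built for
# ROCm 5.4.2 instead of the default. TORCH_COMMAND is the override hook
# that launch.py honors; the index URL is PyTorch's ROCm 5.4.2 wheel repo.
export TORCH_COMMAND="pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm5.4.2"
echo "$TORCH_COMMAND"
```

With this set (and the torch lines commented out of requirements.txt and requirements_versions.txt so pip does not pull the default build back in), webui.sh can be run as usual.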
What should have happened?
Should have opened the UI
Commit where the problem happens
22bcc7b
What platforms do you use to access the UI ?
Linux
What browsers do you use to access the UI ?
Mozilla Firefox
Command Line Arguments
no
List of extensions
no
Console logs
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################
################################################################
Running on xx user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
Python 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0]
Commit hash: 22bcc7be428c94e9408f589966c2040187245d81
Installing requirements for Web UI
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
/media/veku/heavy/ROCMstablediff/venv/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
warnings.warn(
Loading weights [88ecb78256] from /media/veku/heavy/ROCMstablediff/models/Stable-diffusion/v2-1_512-ema-pruned.ckpt
Creating model from config: /media/veku/heavy/ROCMstablediff/repositories/stable-diffusion-stability-ai/configs/stable-diffusion/v2-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 865.91 M params.
Applying cross attention optimization (Doggettx).
Additional information
No response
The process is really stuck: Ctrl+C doesn't display anything, and my GPU usage stays at 90-100% even after closing the terminal.
Same issue here with an RX 7900 XTX on Arch Linux.
Why does it apply Doggettx's? With --skip-torch-cuda-test it applies InvokeAI's and works, but I don't know what all this means, and I don't want to run it on the CPU.
For me it even gets stuck with --disable-opt-split-attention, so I suspect it is related to the step after applying the cross attention optimization.
Running on Arch Linux with hip-runtime-amd version 5.4.3-1 on an AMD Radeon RX 5700.
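The flags mentioned by the commenters above are normally passed through `COMMANDLINE_ARGS`, the variable webui.sh and webui-user.sh read for launch options; a sketch of trying them, under the assumption that this is how the commenters set them:

```shell
#!/usr/bin/env bash
# Hedged sketch: pass the flags discussed in this thread via
# COMMANDLINE_ARGS (read by webui.sh / webui-user.sh).
#   --skip-torch-cuda-test        skip the CUDA availability check
#   --disable-opt-split-attention turn off the split-attention optimization
export COMMANDLINE_ARGS="--skip-torch-cuda-test --disable-opt-split-attention"
echo "$COMMANDLINE_ARGS"
```

As reported above, results vary: one commenter got the InvokeAI optimization to apply this way, while another still hangs.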
After some debug prints, I was able to isolate the problem to sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True) (https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/sd_models.py#L469). I then deleted all textual inversion embeddings I had (in ./embeddings), which in my case was just one I had once experimented with. After that, the UI started up normally. ~~I wasn't able to test whether generation works yet, though.~~
Image generation, however, does not work and hangs with similar symptoms (full GPU usage in the shader interpolator, one CPU core fully used).
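The embeddings workaround described above can be sketched as moving the files aside rather than deleting them, so they can be restored later. `disable_embeddings` is a hypothetical helper name, not part of the webui; the demo directory stands in for an actual install:

```shell
#!/usr/bin/env bash
# Sketch of the workaround: move textual inversion embeddings out of
# ./embeddings so load_textual_inversion_embeddings() finds nothing.
# disable_embeddings is an illustrative helper, not a webui function.
disable_embeddings() {
  local dir="$1"
  mkdir -p "$dir/embeddings.disabled"
  # .pt and .safetensors are the usual embedding file extensions
  local f
  for f in "$dir"/embeddings/*.pt "$dir"/embeddings/*.safetensors; do
    [ -e "$f" ] && mv "$f" "$dir/embeddings.disabled/"
  done
  return 0
}

# Demo on a throwaway directory standing in for the webui install:
demo=$(mktemp -d)
mkdir -p "$demo/embeddings"
touch "$demo/embeddings/experiment.pt"
disable_embeddings "$demo"
ls "$demo/embeddings.disabled"   # experiment.pt
```

Moving the files back into ./embeddings reverses the change.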
Is there a workaround for this at all? I got this after updating to torch 2.0.1+rocm5.4.2. Manjaro and an RX 5700 XT.
The token saves the last "Max steps" value. Increase it.