stable-diffusion-webui
[Bug]: WebUI won't load on Windows 11, 3080
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
Hello, and thanks for your hard work.
I've been trying to launch the webui as instructed, but loading seems to fail. The only thing I was able to track it down to is that the process hangs at https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/804d9fb83d0c63ca3acd36378707ce47b8f12599/modules/sd_models.py#L252 (`sd_model.to(shared.device)`), and forcing the load on CPU seems to help, but the GPU variant isn't working for me.
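To narrow down whether the hang is specific to the GPU device move, here is a minimal standalone sketch (not part of the webui; `shared.device` is the webui's name, here the device is chosen directly) that moves a tiny stand-in model to the device with a timeout, so a stuck `.to()` call is reported instead of blocking forever:

```python
# Minimal, hypothetical repro of the hanging operation: move a model to the
# chosen device on a worker thread with a timeout, so a stuck .to() call is
# reported rather than blocking indefinitely. Falls back when torch is absent.
import threading

def try_device_move(timeout_s=30.0):
    """Return a short status string describing whether the device move worked."""
    try:
        import torch
    except ImportError:
        return "torch not installed"

    device = "cuda" if torch.cuda.is_available() else "cpu"
    result = {}

    def worker():
        model = torch.nn.Linear(4, 4)  # tiny stand-in for the SD checkpoint
        model.to(device)               # the call that hangs in sd_models.py
        result["ok"] = True

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    t.join(timeout_s)
    if result.get("ok"):
        return f"moved to {device}"
    return f"hung moving to {device} (>{timeout_s}s)"

print(try_device_move())
```

If this small script also hangs with `device = "cuda"` but succeeds on CPU, the problem is below the webui, in the torch/driver layer.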
Console output below
Steps to reproduce the problem
- Go to the repository folder
- Run `webui-user.bat`
- Wait
What should have happened?
The Web UI loads.
Commit where the problem happens
804d9fb83d0c63ca3acd36378707ce47b8f12599
What platforms do you use to access the UI?
Windows
What browsers do you use to access the UI?
Google Chrome
Command Line Arguments
No response
Additional information, context and logs
```
c:\DEV\StableDiffusion\webui>webui-user.bat
venv "c:\DEV\StableDiffusion\webui\venv\Scripts\Python.exe"
Python 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)]
Commit hash: 804d9fb83d0c63ca3acd36378707ce47b8f12599
Fetching updates for K-diffusion...
Checking out commit for K-diffusion with hash: 60e5042ca0da89c14d1dd59d73883280f8fce991...
Installing requirements for Web UI
Launching Web UI with arguments:
Warning: caught exception 'invalid stoi argument', memory monitor disabled
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Loading weights [81761151] from C:\DEV\StableDiffusion\webui\models\Stable-diffusion\1-5-pruned-emaonly.ckpt
Global Step: 840000
Press any key to continue . . .
```
Versions:

```
❯ python --version
Python 3.10.8
❯ python -c "import torch; print(torch.__version__)"
1.12.1+cu113
❯ python -c "import torch; print(torch.version.cuda)"
11.3
```
As for specs: 64 GB RAM, Intel Core i7-10700KF, RTX 3080 (10 GB), Windows 11 (22H2, 22622.440).
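Given those versions, one quick sanity check (a standalone sketch, not part of the webui) is whether this torch build can see the GPU at all; if `torch.cuda.is_available()` returns `False`, the cu113 wheel may not match the installed driver:

```python
# Standalone sanity check: does the installed torch build detect the GPU?
# Degrades gracefully when torch itself is not installed.
def cuda_report():
    """Return a dict describing torch/CUDA visibility on this machine."""
    try:
        import torch
    except ImportError:
        return {"torch": None}
    info = {
        "torch": torch.__version__,
        "cuda_build": torch.version.cuda,       # CUDA version torch was built with
        "available": torch.cuda.is_available(), # False => driver/wheel mismatch
    }
    if info["available"]:
        info["device"] = torch.cuda.get_device_name(0)
    return info

print(cuda_report())
```

On this setup one would expect `available: True` and a device name containing "3080"; anything else points at the torch install rather than the webui.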
The files in `tmp/*` won't tell much: `stderr.txt` is blank, and `stdout.txt` only has generic drive information.
If there's anything else I can provide, please let me know.
How did you run `webui-user.bat`? For me, when I double-click it, it gets stuck. If I open CMD first and then run `webui-user.bat`, more things happen (still waiting for it to complete...).
Windows 10 x64 here; the webui loads extremely slowly for me too after updating to https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/804d9fb83d0c63ca3acd36378707ce47b8f12599.
`git pull` works fine (already up to date), and launching the server is fine too (the cmd.exe output looks normal; I saw `Running on local URL: http://127.0.0.1:7860` soon after running the .bat).
But opening http://127.0.0.1:7860/ in Firefox only shows a blank page loading. After about 3-5 minutes the UI shows, and it's still loading (models and many options); then after about 2 more minutes the tab's loading indicator disappears, everything looks normal, and generating images works as usual. I didn't see any abnormal output in cmd.
The only problem I noticed is the very slow startup time of the webui.
P.S. When I update, `git pull` works fine, but the first run of `webui-user.bat` shows warnings about `README.md`, `k_diffusion/sampling`, and `setup.cfg` ("your local changes to the files would be overwritten by checkout" or something like that; I didn't change those files, though). I delete those files and run it again, and then it works.
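The manual fix above (deleting the changed files and rerunning) can also be done by telling git to discard the local modifications before the checkout. A minimal sketch, wrapping plain git commands; the example path is hypothetical and should be replaced with wherever the conflicting clone actually lives:

```python
# Sketch: discard local modifications to tracked files so a later
# `git checkout <commit>` no longer complains about overwriting them.
import subprocess

def discard_local_changes(repo_path):
    # `git checkout -- .` restores every tracked file in the working tree
    # to its committed state; untracked files are left alone.
    subprocess.run(["git", "-C", repo_path, "checkout", "--", "."], check=True)

# Example (hypothetical path; the webui keeps its clones under repositories/):
# discard_local_changes("repositories/k-diffusion")
```

The equivalent one-liner in the repository folder is `git checkout -- .`; either way, untracked files are untouched, so nothing new is lost.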
> How did you run `webui-user.bat`? For me when I double click it, it gets stuck. If I open CMD before and then run `webui-user.bat` I get more things happening (still waiting for it to complete...).
I tried Windows Terminal and "pure" cmd, never just a double-click.
As for what @byzod mentioned, for me it's not "extremely slow" but "not loading at all": the app just shuts itself down and waits for input to return to the CMD (or PowerShell) prompt.
What might be worth mentioning is that I can see the model trying to allocate memory (judging by Resource Monitor, python.exe gets up to ~8 GB of RAM), but then it "deflates" back to a few hundred MB.