stable-diffusion-webui
[Bug]: Error: Connection errored out.
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
This is my first time making a report on github, so I might miss a few things here and there.
I was using it normally for the first five minutes. After adding a couple of models and switching model for the first time, it stayed on "refreshing" for five minutes, so I reloaded the page, and now I see errors all over the page: when I type in the input box it shows an error, and I can't change models because it shows an error there too. I tried reloading the UI, reinstalling the whole git repo, and deleting the huggingface folder from ./cache. Sometimes that works, but it breaks again after a minute.
I've checked the browser console; all I see is one error:
Firefox can’t establish a connection to the server at ws://127.0.0.1:7860/queue/join.
Steps to reproduce the problem
Unknown
What should have happened?
Should not show error messages and work normally.
Commit where the problem happens
955df775
What platforms do you use to access the UI ?
Windows
What browsers do you use to access the UI ?
Mozilla Firefox
Command Line Arguments
None
List of extensions
Default
Console logs
venv "D:\Workspace\Stable Diffusion WebUI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 955df7751eef11bb7697e2d77f6b8a6226b21e13
Installing requirements for Web UI
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
Loading weights [abcaf14e5a] from D:\Workspace\Stable Diffusion WebUI\stable-diffusion-webui\models\Stable-diffusion\anything-v3-full.safetensors
Creating model from config: D:\Workspace\Stable Diffusion WebUI\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(0):
Model loaded in 5.2s (load weights from disk: 0.5s, create model: 0.6s, apply weights to model: 0.9s, apply half(): 0.9s, move model to device: 0.9s, load textual inversion embeddings: 1.3s).
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 15.7s (import torch: 2.7s, import gradio: 2.5s, import ldm: 0.6s, other imports: 2.6s, load scripts: 1.1s, load SD checkpoint: 5.6s, create ui: 0.4s, gradio launch: 0.1s).
Additional information
No response
Same here. Happened after the last update.
After trying a bunch of things, I found that adding the code below at the bottom of the style.css file helps somewhat, but problems still occur occasionally.
```css
[id^="setting_"] > div[style*="position: absolute"] {
    display: none !important;
}
```
I have the same error when I try to connect from another device, although everything is fine on the main computer
Going off the external network and clearing your proxy settings fixes it.
> I have the same error when I try to connect from another device, although everything is fine on the main computer

Me too.
Same problem after a fresh install; it always happens when sending an image to Extras and trying to upscale it.
Happens with any upscaler, every time: fresh install, any model, any VAE. Any suggestions to try?
export COMMANDLINE_ARGS="--no-gradio-queue"
> Going off the external network and clearing your proxy settings fixes it.

But it's deployed on a server, so I can only use the external network.
I'm running into this issue as well
> export COMMANDLINE_ARGS="--no-gradio-queue"

Same issue here on Ubuntu 22.04. This fixed it for me (thanks!).
> export COMMANDLINE_ARGS="--no-gradio-queue"

Been struggling with the same thing for a few days now. This does fix the UI, but I need the queue to work, so hopefully we can figure out why it has been breaking everything :(
It has been broken in every commit I've tried since the option was added.
How can I solve this problem?
Same issue; is there any way to solve it without the --no-gradio-queue flag? Thanks.
> export COMMANDLINE_ARGS="--no-gradio-queue"

Sorry, dumb question: how do I run this command? Do I put it in the batch file? If I run it in Python, it gives a syntax error.
Just copy and paste export COMMANDLINE_ARGS="--no-gradio-queue" into the shell you use to run bash webui.sh, then press Enter, before running bash webui.sh. Alternatively, put the line in your ~/.bashrc file.
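In other words (a minimal sketch; webui.sh picks COMMANDLINE_ARGS up from the environment):

```bash
# One-off: set the variable for this shell session, then launch
export COMMANDLINE_ARGS="--no-gradio-queue"
bash webui.sh

# Or make it persistent by appending the export to ~/.bashrc
echo 'export COMMANDLINE_ARGS="--no-gradio-queue"' >> ~/.bashrc
```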
> Just copy and paste export COMMANDLINE_ARGS="--no-gradio-queue" into the shell you use to run bash webui.sh […]
Usually I just double-click the batch file on Windows; I don't normally run from a CLI. Do I run it like this, and then launch Stable Diffusion like this?
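On Windows the equivalent is to set the flag in webui-user.bat rather than using export; a minimal sketch based on the stock launcher file, which sets COMMANDLINE_ARGS before calling webui.bat:

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem Pass the workaround flag to the webui
set COMMANDLINE_ARGS=--no-gradio-queue

call webui.bat
```

Double-clicking webui-user.bat then launches the UI with the flag applied.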
> export COMMANDLINE_ARGS="--no-gradio-queue"

This solves the problem, but another problem emerges: while one task is processing, I cannot put a second one into the task queue. After I click "Generate" to submit a second task, it shows "In queue..." forever.
Remote Instance
- After updating to the newest version (today) I get the same error using the webui on a remote instance, launched with --listen --port 4000 --api --no-half --gradio-auth user:name --api-auth user:name --hide-ui-dir-config --cors-allow-origins=*
- Requests via the API work just fine, even queuing.

Local Instance
- Works just fine.
- Tested with and without the following options:
  - --listen --port 4000 (with and without)
  - --gradio-auth user:name (with and without)
  - --api --api-auth user:name (with and without)
> Remote Instance: After updating to the newest version (today) I get the same error using the webui on a remote instance […] Local Instance: Works just fine.
Which branch are you using? The latest I see for the master branch is from 2 weeks ago: 22bcc7b
Using master, which includes 22bcc7be428c94e9408f589966c2040187245d81, does not fix the problem for me. The only workaround is --no-gradio-queue.
I am running the ClashX proxy; when I quit it, the error goes away.
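If quitting the proxy is not an option, a hedged alternative is to stop it from intercepting loopback traffic, since the failing websocket is ws://127.0.0.1:7860/queue/join. Browsers and proxy clients such as ClashX have their own bypass lists where 127.0.0.1 can be added; for command-line tools, the conventional (but not universally honored) NO_PROXY variables look like this:

```bash
# Assumption: your tooling honors the conventional NO_PROXY variables
export NO_PROXY="localhost,127.0.0.1"
export no_proxy="localhost,127.0.0.1"
```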
> export COMMANDLINE_ARGS="--no-gradio-queue"

Worked for me, thanks!
> export COMMANDLINE_ARGS="--no-gradio-queue"

Thank you, it works on my Ubuntu Server 20.04.
Can I get confirmation whether any of these issues only happen on http connections that use --gradio-auth? (Except for @foxytocin, who has already stated that they had issues either way.)
When the gradio queue is enabled and tries to use websockets, it attempts to access the login cookie for an https connection and fails, because only the one created over http exists.
Apparently a documented gradio issue. I've been trying to fix it for about two weeks. I just wish the people saying to use --no-gradio-queue had mentioned that this was the reason, since I need the queue to be working.
Once I knew that was the problem, it took me about five seconds to fix with an SSL cert. I've wasted so much time thinking the webui's queue implementation was the problem.
Anyway, that was the issue for me, and I hope stating it here helps someone else.
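A minimal sketch of that fix, assuming the --tls-keyfile/--tls-certfile options present in recent webui builds (a self-signed certificate is enough for testing, though browsers will warn about it; file paths and credentials are illustrative):

```bash
# Generate a self-signed certificate for local/LAN use (illustrative paths)
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout key.pem -out cert.pem -subj "/CN=localhost"

# Serve the UI over https so the auth cookie and the queue websocket (wss://)
# use the same scheme
export COMMANDLINE_ARGS="--listen --gradio-auth user:pass --tls-keyfile key.pem --tls-certfile cert.pem"
bash webui.sh
```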
> When the gradio queue is enabled and tries to use websockets, it attempts to access the login cookie for an https connection and fails, because only the one created over http exists. […]
Referring to gradio's bug fix: manually modifying the routes file currently solves the problem, but some plugins still report errors. https://github.com/gradio-app/gradio/pull/3735/files
Preparing dataset...
  0%|          | 0/9 [00:00<?, ?it/s]
/Users/me/Downloads/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/amp/autocast_mode.py:198: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
  warnings.warn('User provided device_type of \'cuda\', but CUDA is not available. Disabling')
/Users/me/Downloads/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py:736: UserWarning: The operator 'aten::index.Tensor' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
  pooled_output = last_hidden_state[
100%|███████████████████████████████████████████████████████████████| 9/9 [00:04<00:00, 2.09it/s]
/Users/me/Downloads/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py:115: UserWarning: torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling.
  warnings.warn("torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling.")
  0%|          | 0/100000 [00:00<?, ?it/s]
/AppleInternal/Library/BuildRoots/9941690d-bcf7-11ed-a645-863efbbaf80d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSNDArray/Kernels/MPSNDArrayConvolution.mm:1663: failed assertion `Only Float32 convolution supported'
zsh: abort      ./webui.sh
me@MacBook-Pro stable-diffusion-webui % /usr/local/Cellar/python@3.10/3.10.10_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d ')
I keep getting errors about "Only Float32 convolution supported". Does anyone know why? It leads to the timed-out message stated above. I tried the command mentioned above, but same issue. Once the dataset is prepared and the first progress bar reaches 100%, the second bar crashes.
Having this same issue running a queue in a remote instance
@Rayregula Can you elaborate on how you fixed this issue?
> When the gradio queue is enabled and tries to use websockets, it attempts to access the login cookie for an https connection and fails, because only the one created over http exists. […]
Yes, the issue only happens when using the --gradio-auth option.