
[Bug]: Error: Connection errored out.

Open HaruomiX opened this issue 1 year ago • 66 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

This is my first time making a report on github, so I might miss a few things here and there.

It worked normally for the first five minutes. After adding a couple of models and switching models for the first time, the page stayed on "refreshing" for five minutes, so I reloaded it. Now I see errors all over the page: typing in the input box shows an error, and changing models shows an error as well. I tried reloading the UI, reinstalling the whole git repo, and deleting the huggingface folder from ./cache. Sometimes it works, but it breaks again after a minute.


I checked the browser console and all I see is one error: Firefox can't establish a connection to the server at ws://127.0.0.1:7860/queue/join.
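For anyone debugging this, a quick way to tell whether the websocket endpoint is unreachable (server down) or reachable but failing at the protocol level is a plain TCP probe. This is a sketch: it only checks that something is listening on the port from the error message, not that the Gradio queue handshake works.

```python
import socket
from urllib.parse import urlparse

def ws_port_open(ws_url: str, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the ws:// URL's host:port succeeds."""
    parts = urlparse(ws_url)
    # Default ports mirror http/https since ws/wss share them
    port = parts.port or (443 if parts.scheme == "wss" else 80)
    try:
        with socket.create_connection((parts.hostname, port), timeout=timeout):
            return True
    except OSError:
        return False

print(ws_port_open("ws://127.0.0.1:7860/queue/join"))
```

If this prints False, the webui process itself is not listening; if True, the problem is in the websocket/queue layer (proxy, auth cookie, etc.).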

Steps to reproduce the problem

Unknown

What should have happened?

Should not show error messages and work normally.

Commit where the problem happens

955df775

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

None

List of extensions

Default

Console logs

venv "D:\Workspace\Stable Diffusion WebUI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 955df7751eef11bb7697e2d77f6b8a6226b21e13
Installing requirements for Web UI
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
Loading weights [abcaf14e5a] from D:\Workspace\Stable Diffusion WebUI\stable-diffusion-webui\models\Stable-diffusion\anything-v3-full.safetensors
Creating model from config: D:\Workspace\Stable Diffusion WebUI\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(0):
Model loaded in 5.2s (load weights from disk: 0.5s, create model: 0.6s, apply weights to model: 0.9s, apply half(): 0.9s, move model to device: 0.9s, load textual inversion embeddings: 1.3s).
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 15.7s (import torch: 2.7s, import gradio: 2.5s, import ldm: 0.6s, other imports: 2.6s, load scripts: 1.1s, load SD checkpoint: 5.6s, create ui: 0.4s, gradio launch: 0.1s).

Additional information

No response

HaruomiX avatar Mar 28 '23 03:03 HaruomiX

Same here. Happened after the last update.

AndreyDonchev avatar Mar 28 '23 03:03 AndreyDonchev

After trying a bunch of stuff I have found out that adding the code below at the bottom of the style.css file helps a bit, but problems still occur sometimes.

[id^="setting_"] > div[style*="position: absolute"] {
	display: none !important;
}

HaruomiX avatar Mar 28 '23 04:03 HaruomiX

I have the same error when I try to connect from another device, although everything is fine on the main computer

Oxygeniums avatar Mar 28 '23 04:03 Oxygeniums

Disable external network access and clear your proxy settings; that fixes it.

tangbaiwan avatar Mar 28 '23 05:03 tangbaiwan

I have the same error when I try to connect from another device, although everything is fine on the main computer

me too

bjl101501 avatar Mar 28 '23 08:03 bjl101501

Same problem, did a fresh install, always happens when sending an image to extras and trying to scale it

mik3lang3lo avatar Mar 28 '23 11:03 mik3lang3lo


Happens with any upscaler, all the time, fresh install, any model, any VAE. Any suggestion to try?

mik3lang3lo avatar Mar 28 '23 12:03 mik3lang3lo

export COMMANDLINE_ARGS="--no-gradio-queue"

RchGrav avatar Mar 28 '23 22:03 RchGrav

Disable external network access and clear your proxy settings; that fixes it.

Mine is deployed on a server, so it can only be reached over the external network.

muzipiao avatar Mar 31 '23 02:03 muzipiao

I'm running into this issue as well

ProGamerGov avatar Mar 31 '23 22:03 ProGamerGov

export COMMANDLINE_ARGS="--no-gradio-queue"

Same issue here on Ubuntu 2204. This fixed it for me (thanks!).

hashnag avatar Apr 02 '23 06:04 hashnag

export COMMANDLINE_ARGS="--no-gradio-queue"

Been struggling with the same thing for a few days now. This does fix the ui but I require that the queue is working so hopefully we can figure out the reason it has been breaking everything :(

Has been broken in every commit I've tried since the option was added

Rayregula avatar Apr 02 '23 08:04 Rayregula

How can I solve this problem?

EricChanc avatar Apr 02 '23 18:04 EricChanc

Same issue, any way to solve this issue without flag with "--no-gradio-queue"? thanks

terrificdm avatar Apr 03 '23 12:04 terrificdm

export COMMANDLINE_ARGS="--no-gradio-queue"

Sorry, dumb question: how do I run this command? Do I insert it in the batch file? If I run it in Python it gives a syntax error.

Shaiktit avatar Apr 04 '23 10:04 Shaiktit

Just copy and paste export COMMANDLINE_ARGS="--no-gradio-queue" into the CLI tool you use to run the bash webui.sh command, then press Enter. Run it before running bash webui.sh, or put the line in your ~/.bashrc file.

terrificdm avatar Apr 04 '23 10:04 terrificdm

Just copy and paste export COMMANDLINE_ARGS="--no-gradio-queue" into the CLI tool you use to run the bash webui.sh command, then press Enter. Run it before running bash webui.sh, or put the line in your ~/.bashrc file.

Usually I just double-click the batch file on Windows; I don't usually use a CLI. Do I run the export command first, and then run stable diffusion as usual?

Shaiktit avatar Apr 05 '23 03:04 Shaiktit
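For Windows users asking the same thing: the export command above is for Linux/macOS shells. On Windows the flag normally goes in webui-user.bat instead (a sketch, assuming the stock webui-user.bat that ships with the repo; edit the file in a text editor, then double-click it as usual):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem Disable the gradio queue to work around the websocket errors
set COMMANDLINE_ARGS=--no-gradio-queue

call webui.bat
```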

export COMMANDLINE_ARGS="--no-gradio-queue"

This solves the problem, but another problem emerged: while one task is processing, I cannot put a second one in the queue. After I click "Generate" to submit a second task, it shows "In queue..." forever.

issiah-chain avatar Apr 07 '23 07:04 issiah-chain

Remote Instance

  • After updating to the newest version (today) I get the same error using the webui on a remote instance with --listen --port 4000 --api --no-half --gradio-auth user:name --api-auth user:name --hide-ui-dir-config --cors-allow-origins=*
  • Requests via the API work just fine, even with queuing

Local Instance

  • Works just fine.
  • Tested with and without different flags:
    • --listen --port 4000 works (with and without)
    • --gradio-auth user:name (with and without)
    • --api --api-auth user:name (with and without)

foxytocin avatar Apr 07 '23 11:04 foxytocin

Remote Instance

  • After updating to the newest version (today) i get the same error using the webui on a remote instance with --listen --port 4000 --api --no-half --gradio-auth user:name --api-auth user:name --hide-ui-dir-config --cors-allow-origins=*
  • Requests via the API work just fine, even with queuing

Local Instance

  • Works just fine.

  • Tested with and without different flags:

    • --listen --port 4000 works (with and without)
    • --gradio-auth user:name (with and without)
    • --api --api-auth user:name (with and without)

Which branch are you using? The latest commit I see on the master branch is from 2 weeks ago: 22bcc7b

Rayregula avatar Apr 09 '23 14:04 Rayregula

Using master, which includes 22bcc7be428c94e9408f589966c2040187245d81, does not fix the problem for me. The only workaround is --no-gradio-queue.


jpenalbae avatar Apr 09 '23 22:04 jpenalbae

I am running the ClashX proxy; when I quit it, the error goes away.

honunu avatar Apr 11 '23 03:04 honunu

export COMMANDLINE_ARGS="--no-gradio-queue"

worked for me. thanks!

Jackyboy1988 avatar Apr 13 '23 22:04 Jackyboy1988

export COMMANDLINE_ARGS="--no-gradio-queue"

Thank you, it works on my Ubuntu Server 20.04

Doublefire-Chen avatar Apr 13 '23 22:04 Doublefire-Chen

Can I get confirmation on whether any of these issues happen only on http connections that use --gradio-auth? (Except for @foxytocin, who has already stated that they had issues either way.)

When the gradio queue is enabled and tries to use websockets, it attempts to access the login cookie for an https connection and fails, since only the cookie created over http exists.

Apparently a documented gradio issue. I've been trying to fix it for about two weeks. I just wish the people recommending --no-gradio-queue had mentioned that this was the reason, since I need the queue to be working.

It took me about 5 seconds to fix with an SSL cert once I knew that was the problem. I wasted so much time thinking the webui's queue implementation was the problem.

Anyway, that was the issue for me, and I hope stating it here helps someone else.
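For anyone in the same situation, one way to apply that kind of fix is to serve the webui over https with a self-signed certificate. This is a sketch: it assumes the webui's --tls-keyfile/--tls-certfile flags (present in recent builds) and that a browser warning for a self-signed cert is acceptable.

```shell
# Generate a self-signed certificate and key, valid for one year (requires openssl)
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem \
    -days 365 -nodes -subj "/CN=localhost"

# Launch over https so the auth cookie is set for a secure connection
# and the queue websocket can read it
export COMMANDLINE_ARGS="--listen --gradio-auth user:pass --tls-keyfile key.pem --tls-certfile cert.pem"
bash webui.sh
```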

Rayregula avatar Apr 18 '23 15:04 Rayregula

Can I get confirmation on whether any of these issues happen only on http connections that use --gradio-auth? (Except for the user who has already stated that they had issues either way.)

When the gradio queue is enabled and tries to use websockets, it attempts to access the login cookie for an https connection and fails, since only the cookie created over http exists.

Apparently a documented gradio issue. I've been trying to fix it for about two weeks. I just wish the people recommending --no-gradio-queue had mentioned that this was the reason, since I need the queue to be working.

It took me about 5 seconds to fix with an SSL cert once I knew that was the problem. I wasted so much time thinking the webui's queue implementation was the problem.

Anyway, that was the issue for me, and I hope stating it here helps someone else.

Referring to gradio's bug fix: manually modifying the routes file currently solves the problem, but some plugins still report errors. See https://github.com/gradio-app/gradio/pull/3735/files

bjl101501 avatar Apr 20 '23 01:04 bjl101501

Preparing dataset...
  0%|          | 0/9 [00:00<?, ?it/s]
/Users/me/Downloads/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/amp/autocast_mode.py:198: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
  warnings.warn('User provided device_type of \'cuda\', but CUDA is not available. Disabling')
/Users/me/Downloads/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py:736: UserWarning: The operator 'aten::index.Tensor' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
  pooled_output = last_hidden_state[
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:04<00:00, 2.09it/s]
/Users/me/Downloads/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py:115: UserWarning: torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling.
  warnings.warn("torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling.")
  0%|          | 0/100000 [00:00<?, ?it/s]
/AppleInternal/Library/BuildRoots/9941690d-bcf7-11ed-a645-863efbbaf80d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSNDArray/Kernels/MPSNDArrayConvolution.mm:1663: failed assertion Only Float32 convolution supported
zsh: abort      ./webui.sh
me@MacBook-Pro stable-diffusion-webui % /usr/local/Cellar/[email protected]/3.10.10_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

I keep getting errors about "Only Float32 convolution supported". Anyone know why? This leads to the timed-out message stated above. I tried the command mentioned above, but same issue. Once the dataset is prepared and the first progress bar reaches 100%, the second bar crashes.

ghost avatar Apr 20 '23 02:04 ghost

Having this same issue running a queue in a remote instance

wjbeeson avatar Apr 25 '23 05:04 wjbeeson

@Rayregula Can you elaborate on how you fixed this issue?

wjbeeson avatar Apr 25 '23 05:04 wjbeeson

Can I get confirmation on whether any of these issues happen only on http connections that use --gradio-auth? (Except for @foxytocin, who has already stated that they had issues either way.)

When the gradio queue is enabled and tries to use websockets, it attempts to access the login cookie for an https connection and fails, since only the cookie created over http exists.

Apparently a documented gradio issue. I've been trying to fix it for about two weeks. I just wish the people recommending --no-gradio-queue had mentioned that this was the reason, since I need the queue to be working.

It took me about 5 seconds to fix with an SSL cert once I knew that was the problem. I wasted so much time thinking the webui's queue implementation was the problem.

Anyway, that was the issue for me, and I hope stating it here helps someone else.

Yes, the issue only happens when using --gradio-auth option.

jpenalbae avatar Apr 25 '23 16:04 jpenalbae