stable-diffusion-webui
[Bug]: /queue/status
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
I am using the /queue/status endpoint to grab information about the queue. I've simulated multiple txt2img prompts being queued up, and they do eventually process sequentially; however, whenever I hit the /queue/status endpoint, some of the fields seem wrong. Here is an example response:
{
"msg": "estimation",
"rank": null,
"queue_size": 0,
"avg_event_process_time": 5.526303911209107,
"avg_event_concurrent_process_time": 1.1052607822418214,
"rank_eta": null,
"queue_eta": 0
}
Steps to reproduce the problem
- Queue up multiple txt2img prompts
- Hit the /queue/status endpoint (a scripted version of both steps is sketched below)
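A minimal way to script the reproduction, assuming the server was launched with --api on the default 127.0.0.1:7860 (the report doesn't say whether the prompts went through the UI or the API, so treat this as just one way to stack up requests):

```python
import threading
import requests

BASE = "http://127.0.0.1:7860"  # assumed default host/port; adjust to your instance

def txt2img(prompt: str) -> None:
    # webui's built-in text-to-image API route (requires launching with --api)
    requests.post(f"{BASE}/sdapi/v1/txt2img", json={"prompt": prompt, "steps": 20})

# fire several generations at once so requests stack up
threads = [threading.Thread(target=txt2img, args=(f"a photo of a cat, variant {i}",))
           for i in range(4)]
for t in threads:
    t.start()

# while they run, poll the gradio queue endpoint from the report
print(requests.get(f"{BASE}/queue/status").json())

for t in threads:
    t.join()
```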
What should have happened?
In the example response above, the fields avg_event_process_time and avg_event_concurrent_process_time seem to have correct values. However, I do not expect rank to be null, or queue_size to be 0. No matter what I try, I cannot get the /queue/status endpoint to return any other values for rank or queue_size.
Commit where the problem happens
a0d07fb5807ad55c8ccfdfc9a6d9ae3c62b9d211
What platforms do you use to access the UI ?
No response
What browsers do you use to access the UI ?
No response
Command Line Arguments
No
List of extensions
No
Console logs
webui-docker-auto-1 | + python -u webui.py --listen --port 7860 --allow-code --medvram --xformers --enable-insecure-extension-access --api
webui-docker-auto-1 | Removing empty folder: /stable-diffusion-webui/models/BSRGAN
webui-docker-auto-1 | Calculating sha256 for /stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt: c6bbc15e3224e6973459ba78de4998b80b50112b0ae5b5c67113d56b4e366b19
webui-docker-auto-1 | Loading weights [c6bbc15e32] from /stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt
webui-docker-auto-1 | Creating model from config: /stable-diffusion-webui/configs/v1-inpainting-inference.yaml
webui-docker-auto-1 | LatentInpaintDiffusion: Running in eps-prediction mode
webui-docker-auto-1 | DiffusionWrapper has 859.54 M params.
webui-docker-auto-1 | Applying xformers cross attention optimization.
webui-docker-auto-1 | Textual inversion embeddings loaded(0):
webui-docker-auto-1 | Model loaded in 16.9s (calculate hash: 11.3s, load weights from disk: 2.4s, create model: 0.7s, apply weights to model: 1.1s, apply half(): 0.4s, load VAE: 1.0s).
webui-docker-auto-1 | Running on local URL: http://0.0.0.0:7860
webui-docker-auto-1 |
webui-docker-auto-1 | To create a public link, set `share=True` in `launch()`.
webui-docker-auto-1 | Startup time: 21.6s (import torch: 1.1s, import gradio: 0.8s, import ldm: 0.3s, other imports: 1.5s, load scripts: 0.4s, load SD checkpoint: 16.9s, create ui: 0.3s, scripts app_started_callback: 0.1s).
webui-docker-auto-1 |
Total progress: 100%|██████████| 20/20 [00:07<00:00, 2.61it/s]
webui-docker-auto-1 | ████████| 20/20 [00:07<00:00, 5.56it/s]
Total progress: 100%|██████████| 20/20 [00:04<00:00, 4.68it/s]
webui-docker-auto-1 | ████████| 20/20 [00:04<00:00, 5.48it/s]
Total progress: 100%|██████████| 20/20 [00:04<00:00, 4.62it/s]
Additional information
No response
Did you solve this problem?
I also have this problem.
same issue here
same issue
Same here, it's not working; even avg_event_process_time and avg_event_concurrent_process_time sometimes show 0.
It seems that /queue/status is an endpoint inherited from gradio; I haven't seen any special handling for it in webui. In my case, I am trying to use webui as a standalone HTTP server receiving users' draw calls over HTTP, and I want to fetch the status of the queue (in-queue task count, current task, remaining time for queued tasks to finish, etc.) in order to give feedback to the users.
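For what it's worth, webui's own API does expose /sdapi/v1/progress, which reports on the currently running job (progress fraction, ETA, job state) but says nothing about queue depth. A minimal polling sketch, assuming the default host/port; the exact fields inside "state" may vary by version:

```python
import time
import requests

BASE = "http://127.0.0.1:7860"  # assumed default host/port

for _ in range(10):
    r = requests.get(f"{BASE}/sdapi/v1/progress",
                     params={"skip_current_image": True}).json()
    # "progress" is a 0..1 fraction and "eta_relative" an ETA in seconds for the
    # running job; "state" carries fields like job_count/job_no (version-dependent)
    print(f"progress={r['progress']:.0%} eta={r['eta_relative']:.1f}s state={r['state']}")
    time.sleep(1)
```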
Well, I found that modules/progress.py looks like a good place to add queue-status querying APIs.
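Along those lines, here is a minimal sketch of what such an API could look like as an extension script. The route name is made up, and progress.current_task / progress.pending_tasks reflect the bookkeeping in modules/progress.py at the time of writing, so they may differ between webui versions:

```python
# e.g. extensions/queue-status/scripts/queue_status_api.py (path and route are illustrative)
import gradio as gr
from fastapi import FastAPI

from modules import progress, script_callbacks


def queue_status_api(_: gr.Blocks, app: FastAPI):
    @app.get("/custom/queue-status")
    def queue_status():
        # modules/progress.py keeps module-level task bookkeeping; the attribute
        # names (current_task, pending_tasks) may change between webui versions
        return {
            "current_task": progress.current_task,
            "pending_task_count": len(progress.pending_tasks),
        }


script_callbacks.on_app_started(queue_status_api)
```

Registering the route through script_callbacks.on_app_started keeps the change out of webui's own code, so it would survive upgrades.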
I have the same issue here in the 1.6.0 release; my goal is to make a load-balancer-like server that directs users to the least occupied instance of my webui servers.
hello, have you solved it yet?
Hello, in our case we ended up doing a simple round-robin load balancer. For the most part it did the job, especially since we told users that if they find themselves stuck in a long queue, they can reload the WebUI and it will switch them to a different instance. We're currently working on a better health-check solution based on user activity, but as of now we couldn't make use of webui's API to actually learn the queue's status. Since images are generated relatively quickly, I don't think queue status would be the most ideal way to distribute load anyway, as opposed to just keeping track of user sessions on each instance.
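For anyone taking the same route, the round-robin part itself is tiny; a minimal sketch, with hypothetical backend URLs (session stickiness and health checks would sit on top of this):

```python
import itertools

# hypothetical webui instances behind the balancer
BACKENDS = ["http://webui-1:7860", "http://webui-2:7860"]
_cycle = itertools.cycle(BACKENDS)

def next_backend() -> str:
    # plain round robin: hand out instances in turn, with no queue awareness
    return next(_cycle)

if __name__ == "__main__":
    for _ in range(4):
        print(next_backend())
```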
same issue