[Bug]: /queue/status

Open mpoon opened this issue 1 year ago • 10 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

I am using the /queue/status endpoint to get information about the queue. I've queued up multiple txt2img prompts, which do eventually process sequentially, yet whenever I hit the /queue/status endpoint, some of the fields look wrong. Here is an example response:

{
  "msg": "estimation",
  "rank": null,
  "queue_size": 0,
  "avg_event_process_time": 5.526303911209107,
  "avg_event_concurrent_process_time": 1.1052607822418214,
  "rank_eta": null,
  "queue_eta": 0
}

Steps to reproduce the problem

  1. Queue up multiple txt2img prompts
  2. Hit the /queue/status endpoint
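The two steps above can be sketched in Python. This is a minimal reproduction helper, assuming a local webui started with --api; the /sdapi/v1/txt2img and /queue/status paths come from the default webui/gradio setup, and the prompt payload is illustrative:

```python
import json
import urllib.request

BASE = "http://127.0.0.1:7860"  # assumed local webui launched with --api


def build_txt2img_request(prompt, steps=20):
    """Build the POST request used to queue one txt2img job."""
    payload = json.dumps({"prompt": prompt, "steps": steps}).encode()
    return urllib.request.Request(
        BASE + "/sdapi/v1/txt2img",
        data=payload,
        headers={"Content-Type": "application/json"},
    )


def queue_status():
    """GET /queue/status and return the decoded JSON body."""
    with urllib.request.urlopen(BASE + "/queue/status") as resp:
        return json.load(resp)


# Usage against a running server (not executed here):
#   for i in range(3):
#       urllib.request.urlopen(build_txt2img_request(f"test prompt {i}"))
#   print(queue_status())  # rank comes back null, queue_size comes back 0
```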

What should have happened?

In the example response above, the fields avg_event_process_time and avg_event_concurrent_process_time seem to have correct values. However, I do not expect rank to be null or queue_size to be 0. No matter what I try, I cannot get the /queue/status endpoint to return any other values for rank or queue_size.

Commit where the problem happens

a0d07fb5807ad55c8ccfdfc9a6d9ae3c62b9d211

What platforms do you use to access the UI ?

No response

What browsers do you use to access the UI ?

No response

Command Line Arguments

No

List of extensions

No

Console logs

webui-docker-auto-1  | + python -u webui.py --listen --port 7860 --allow-code --medvram --xformers --enable-insecure-extension-access --api
webui-docker-auto-1  | Removing empty folder: /stable-diffusion-webui/models/BSRGAN
webui-docker-auto-1  | Calculating sha256 for /stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt: c6bbc15e3224e6973459ba78de4998b80b50112b0ae5b5c67113d56b4e366b19
webui-docker-auto-1  | Loading weights [c6bbc15e32] from /stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt
webui-docker-auto-1  | Creating model from config: /stable-diffusion-webui/configs/v1-inpainting-inference.yaml
webui-docker-auto-1  | LatentInpaintDiffusion: Running in eps-prediction mode
webui-docker-auto-1  | DiffusionWrapper has 859.54 M params.
webui-docker-auto-1  | Applying xformers cross attention optimization.
webui-docker-auto-1  | Textual inversion embeddings loaded(0):
webui-docker-auto-1  | Model loaded in 16.9s (calculate hash: 11.3s, load weights from disk: 2.4s, create model: 0.7s, apply weights to model: 1.1s, apply half(): 0.4s, load VAE: 1.0s).
webui-docker-auto-1  | Running on local URL:  http://0.0.0.0:7860
webui-docker-auto-1  |
webui-docker-auto-1  | To create a public link, set `share=True` in `launch()`.
webui-docker-auto-1  | Startup time: 21.6s (import torch: 1.1s, import gradio: 0.8s, import ldm: 0.3s, other imports: 1.5s, load scripts: 0.4s, load SD checkpoint: 16.9s, create ui: 0.3s, scripts app_started_callback: 0.1s).
webui-docker-auto-1  |
Total progress: 100%|██████████| 20/20 [00:07<00:00,  2.61it/s]
webui-docker-auto-1  | ████████| 20/20 [00:07<00:00,  5.56it/s]
Total progress: 100%|██████████| 20/20 [00:04<00:00,  4.68it/s]
webui-docker-auto-1  | ████████| 20/20 [00:04<00:00,  5.48it/s]
Total progress: 100%|██████████| 20/20 [00:04<00:00,  4.62it/s]

Additional information

No response

mpoon avatar Mar 26 '23 00:03 mpoon

Did you solve this problem?

lonely1215225 avatar May 13 '23 09:05 lonely1215225

I also have this problem.

dixin-1129 avatar Jun 09 '23 06:06 dixin-1129

same issue here

lycfyi avatar Jun 14 '23 03:06 lycfyi

same issue

Rabithua avatar Jun 23 '23 08:06 Rabithua

Same here, it is not working; even avg_event_process_time and avg_event_concurrent_process_time sometimes show 0.

Xijamk avatar Jul 29 '23 02:07 Xijamk

It seems that /queue/status is an endpoint inherited from gradio; I haven't seen any special handling for it remaining in the webui. In my case, I am trying to use the webui as a standalone HTTP server that receives users' draw calls via HTTP requests, and I want to fetch the status of the queue (in-queue task count, current task, remaining time for queued tasks to finish, etc.) in order to give feedback to the users.

Well, I found that modules/progress.py looks like a good place to add the queue-status querying APIs.
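To make the idea concrete, here is a rough sketch of the kind of state such an API could expose. This is purely illustrative: the class and field names are hypothetical, not part of the webui, and a real version would hook into the webui's actual task lifecycle:

```python
from dataclasses import dataclass, field


@dataclass
class QueueTracker:
    """Hypothetical tracker for the stats /queue/status fails to report:
    in-queue task count and an ETA derived from past task durations."""
    pending: list = field(default_factory=list)     # task ids still queued
    durations: list = field(default_factory=list)   # seconds per finished task

    def submit(self, task_id):
        self.pending.append(task_id)

    def finish(self, task_id, seconds):
        self.pending.remove(task_id)
        self.durations.append(seconds)

    def status(self):
        # Average of finished tasks, extrapolated over the pending queue.
        avg = (sum(self.durations) / len(self.durations)
               if self.durations else None)
        eta = avg * len(self.pending) if avg is not None else None
        return {
            "queue_size": len(self.pending),
            "avg_task_time": avg,
            "queue_eta": eta,
        }
```

An endpoint added in modules/progress.py could then just return tracker.status() instead of relying on gradio's internal queue counters.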

n41l avatar Sep 04 '23 08:09 n41l

I have the same issue here in the 1.6.0 release. My goal is to make a load-balancer-like server that routes the user to the least occupied instance of my webui servers.

oumad avatar Oct 11 '23 09:10 oumad

I have the same issue here in the 1.6.0 release. My goal is to make a load-balancer-like server that routes the user to the least occupied instance of my webui servers.

hello, have you solved it yet?

lyggyhmm avatar May 16 '24 05:05 lyggyhmm

I have the same issue here in the 1.6.0 release. My goal is to make a load-balancer-like server that routes the user to the least occupied instance of my webui servers.

hello, have you solved it yet?

Hello, in our case we ended up doing a simple round-robin load balancer. For the most part it did the job, especially since we told users that if they find themselves stuck in a long queue, they can re-access the WebUI and it will switch them to a different instance. We're currently working on a better health-check solution based on user activity, but as of now we couldn't make use of the webui's API to actually learn the queue status. Since images are generated relatively quickly, I don't think that would be the most ideal way to distribute instances anyway, as opposed to just keeping track of user sessions on each instance.
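The round-robin approach described above can be sketched in a few lines. The instance URLs below are placeholders:

```python
from itertools import cycle


class RoundRobinBalancer:
    """Hand out webui instances in rotation, since /queue/status
    cannot tell us which instance is least loaded."""

    def __init__(self, instances):
        self._cycle = cycle(instances)

    def next_instance(self):
        return next(self._cycle)


balancer = RoundRobinBalancer([
    "http://webui-1:7860",
    "http://webui-2:7860",
    "http://webui-3:7860",
])
# Each incoming user is sent to balancer.next_instance().
```

Round robin ignores actual load, which is why the queue-status API mattered in the first place, but with short generation times the distribution evens out well enough in practice.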

oumad avatar May 16 '24 17:05 oumad

same issue

ikun-404 avatar Jul 03 '24 06:07 ikun-404