stable-diffusion-webui
Generating batches locks the UI in a broken state
Using Google Colab:
Any time I try to generate multiple images using txt2img (using batch count or batch size), the generation finishes and the images save to file, but the image display widget on the right side of the screen doesn't update. It will keep the last set of images, if there was one, or stay blank if there was not. It also keeps the old set of metadata text at the bottom.
After clicking the "generate" button again after this, nothing will happen. The progress bar does not appear, and the program does not try to generate another image. This "stuck" state persists until I refresh the browser page. The server tool does not seem to need to be restarted.
Commit hash: 3f417566b0bda8eab05d247567aebf001c1d1725
There are no errors in the JavaScript console or the Google Colab console output, even with --gradio-debug set.
In webui-user.bat, update the COMMANDLINE_ARGS argument to the following:
set COMMANDLINE_ARGS=--precision full --no-half --always-batch-cond-uncond
See if that works?
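For reference, a minimal webui-user.bat with that line set would look roughly like this (a sketch based on the stock template that ships with the repo; the empty variables are the defaults, adjust for your install):

```
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--precision full --no-half --always-batch-cond-uncond

call webui.bat
```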
Again, this is a known bug in Gradio: https://github.com/gradio-app/gradio/issues/2260
That was my first thought, except I don't see that network error in my devtools. Unless the network error is happening between Colab and Gradio, which I guess I wouldn't be able to see. Strange that it's just silently eating the error, though.
Still no solution?
You can use ngrok to create a link to a local Gradio app. I edited a Colab from Voldemort and implemented ngrok in a fork; it seems to be working: https://colab.research.google.com/drive/1kiWnIFYbq4mk2JtVNfxwlo3NuVUpiFzb?usp=sharing Kind of a workaround, but I still hope this helps.
Heads up for anyone using Paperspace instead of Colab: they will lock your account if you try to use an ngrok proxy, forcing you to email support to get back in.
I've got a similar error, though I'm not doing anything with Colab.
I'm running in docker (kubernetes), and the error is caused by a gateway timeout on what looks like the original /api/predict
request, which is probably caused by my nginx config. Progress updates show correctly, but once the generation is complete there's no result in the gallery and the "generate" button no longer responds.
Not sure if this is the same error as OP's or an unrelated one with the same symptoms. I'm not using --share
and the network requests for progress updates are all under 2MB.
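If the gateway-timeout theory is right, raising the proxy timeouts on the location that fronts the web UI should help. A sketch of the relevant nginx directives (the upstream name and timeout values here are illustrative assumptions, not taken from an actual working config):

```
location / {
    proxy_pass http://webui:7860;      # upstream name is an assumption
    proxy_read_timeout 600s;           # allow long-running /api/predict requests
    proxy_send_timeout 600s;
    # Gradio uses websocket/streaming connections for progress updates
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```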
Same issue: after the first generation the Generate button disappears and I can no longer generate unless I refresh.
Don't bother with ngrok. This code snippet is all you need for Colab:
from google.colab.output import eval_js
port_num = 7860
print("Click the link below to visit the server")
print(eval_js(f"google.colab.kernel.proxyPort({port_num})"))
That will print a URL you can visit that will forward your browser to the specified port number.
Thank you for the heads up, I tried it and it opens the Web UI but it doesn't work, I got some errors:
POST https://i7xt7xe04rp-496ff2e9c6d22111-XXXX-colab.googleusercontent.com/api/predict/ 500 (Not allowed.)
VM452:1 Uncaught (in promise) SyntaxError: Unexpected token 'N', "Not allowed." is not valid JSON
Do you have an example how you are using it?
Yeah, I hadn't used it in a while. It seems to be an issue with POST requests not having the proper third-party cookies. I just switched to localtunnel.me, which works fine and doesn't require authentication like ngrok does.
npm install -g localtunnel
!$GRADIO_COMMAND & lt --port 7860
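Put together as a single Colab cell, that looks roughly like this (a sketch: `GRADIO_COMMAND` is assumed to be an environment variable holding whatever command launches the web UI in your notebook; the `!` prefix runs shell commands from a notebook cell):

```
# Node/npm are preinstalled on Colab, so localtunnel installs directly
!npm install -g localtunnel
# Start the web UI in the background and expose port 7860 through localtunnel
!$GRADIO_COMMAND & lt --port 7860
```

localtunnel then prints a public localtunnel.me URL that forwards to the local port.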
Perfect!! now I have 3 options to run the web service in Colab: Gradio, ngrok and LocalTunnel, thank you!!
Still happening with Colab. Tried both the Gradio server and localtunnel.me, same problem.
The problem only occurs when batch count > 3 and image size > 512x512.
Same issue as tsaost happening on my device. Tried ngrok but not localtunnel.
https://github.com/gradio-app/gradio/issues/2260