Fooocus
Queue prompts
Hello! I would like to know if it's possible to implement a prompt queue. For example, I have about 20 prompts that need to generate 30 images. Instead of waiting for the queue to finish for each prompt one by one and retyping another one, it would be awesome if we had a queue prompt option so that we could leave as many prompts as we want and leave the PC overnight to generate them, without user input in-between.
Please let me know if this is feasible, as I think it would tremendously improve this app.
Thanks!
Hey, this is currently not possible in the UI, only via the API. You can find examples and further information here: https://github.com/lllyasviel/Fooocus/issues/1259 & https://github.com/lllyasviel/Fooocus/issues/1496
That's unfortunate, but thank you for the response. I'll try to deal with the API somehow, but I'm not the best when it comes to that hahaha.
I have created a simple prompt queue (I'm thinking of creating a PR for this, though not in this exact form, as it's kinda dumb right now, but it works), you can give it a try. Remember to disable auto-update on startup, as it will overwrite these changes.
diff --git a/webui.py b/webui.py
index a5138abf..581fda95 100644
--- a/webui.py
+++ b/webui.py
@@ -23,6 +23,27 @@ from modules.ui_gradio_extensions import reload_javascript
from modules.auth import auth_enabled, check_auth
+QUEUE = []
+
+
+def queue_add(*args):
+ QUEUE.append(args)
+
+
+def queue_start(*args):
+ if not QUEUE:
+ yield from generate_clicked(*args)
+ return
+ for arg in QUEUE:
+ yield from generate_clicked(*arg)
+ QUEUE.clear()
+ # To use every style in single prompt:
+ # for style in legal_style_names:
+ # argss = list(args)
+ # argss[2] = [style]
+ # yield from generate_clicked(*argss)
+
+
def generate_clicked(*args):
import ldm_patched.modules.model_management as model_management
@@ -110,7 +131,8 @@ with shared.gradio_root:
shared.gradio_root.load(lambda: default_prompt, outputs=prompt)
with gr.Column(scale=3, min_width=0):
- generate_button = gr.Button(label="Generate", value="Generate", elem_classes='type_row', elem_id='generate_button', visible=True)
+ generate_button = gr.Button(label="Generate", value="Generate", elem_classes='type_row_half', elem_id='generate_button', visible=True)
+ add_to_queue = gr.Button(label="Add to queue", value="Add to queue (0)", elem_classes='type_row_half', elem_id='add_to_queue', visible=True)
load_parameter_button = gr.Button(label="Load Parameters", value="Load Parameters", elem_classes='type_row', elem_id='load_parameter_button', visible=False)
skip_button = gr.Button(label="Skip", value="Skip", elem_classes='type_row_half', visible=False)
stop_button = gr.Button(label="Stop", value="Stop", elem_classes='type_row_half', elem_id='stop_button', visible=False)
@@ -560,9 +582,13 @@ with shared.gradio_root:
generate_button.click(lambda: (gr.update(visible=True, interactive=True), gr.update(visible=True, interactive=True), gr.update(visible=False), []), outputs=[stop_button, skip_button, generate_button, gallery]) \
.then(fn=refresh_seed, inputs=[seed_random, image_seed], outputs=image_seed) \
.then(advanced_parameters.set_all_advanced_parameters, inputs=adps) \
- .then(fn=generate_clicked, inputs=ctrls, outputs=[progress_html, progress_window, progress_gallery, gallery]) \
+ .then(fn=queue_start, inputs=ctrls, outputs=[progress_html, progress_window, progress_gallery, gallery]) \
.then(lambda: (gr.update(visible=True), gr.update(visible=False), gr.update(visible=False)), outputs=[generate_button, stop_button, skip_button]) \
- .then(fn=lambda: None, _js='playNotification').then(fn=lambda: None, _js='refresh_grid_delayed')
+ .then(fn=lambda: None, _js='playNotification').then(fn=lambda: None, _js='refresh_grid_delayed') \
+ .then(lambda: (gr.update(value=f"Add to queue ({len(QUEUE)})")), outputs=[add_to_queue])
+
+ add_to_queue.click(fn=queue_add, inputs=ctrls) \
+ .then(lambda: (gr.update(value=f"Add to queue ({len(QUEUE)})")), outputs=[add_to_queue])
for notification_file in ['notification.ogg', 'notification.mp3']:
if os.path.exists(notification_file):
Interesting. You shouldn't have copied the git ref etc., just the plain webui.py. 😆
Hello! I'm sorry for being a noob, but can you explain where exactly I have to add this code? I've disabled auto-updates per #1751, but I didn't get where to add the queue.
You can use git apply, but since you're asking, I assumed you'd do it by hand, so: a + at the start of a line means add that line, and a - means remove it. You can tell where to look by checking the lines without any sign. So, for example, from
from modules.auth import auth_enabled, check_auth
+QUEUE = []
you can tell that you must add QUEUE = [] 3 lines below the existing line from modules.auth import auth_enabled, check_auth.
The file you must modify is webui.py.
Ignore all the metadata parts and all lines starting with @@, like this:
diff --git a/webui.py b/webui.py
index a5138abf..581fda95 100644
--- a/webui.py
+++ b/webui.py
@@ -23,6 +23,27 @@ from modules.ui_gradio_extensions import reload_javascript
This can be done, but you need to modify the original code: use wildcard files to store the prompt list, change the code to switch wildcard reading from random to sequential order, and set the number of images generated per run.
Here's another way. https://github.com/lllyasviel/Fooocus/pull/1503
That's the way I said. https://github.com/lllyasviel/Fooocus/pull/1761
you can use git apply, but since you ask...
Can you tell me if there is an easier way to implement this code to generate all styles in order, with "git apply"?
See https://github.com/lllyasviel/Fooocus/discussions/1751
@docppp with "git apply" I somehow got "corrupt patch at line 7" (used VS Code and Git Bash). I applied it by hand and it works! Note that my webui.py has different line numbers (yours says "@@ -560,9", mine starts at 582); maybe the last update moved it. (I could easily be wrong about anything, since I only started learning for this task.)
Anyway, applied by hand and it works!
I want to ask: is there any way to add a "Prompts from file or textbox" script to the Fooocus UI? I have a .txt file with 50 prompts, one per line.
I'm writing this from the top of my head, so it may need some tweaks, but if you replace the body of the queue_start function with the following (set text_file_path to your prompt file), it should work:
with open(text_file_path, 'r') as text_file:
    lines = text_file.readlines()
for prompt in lines:
    argss = list(args)
    argss[0] = prompt
    yield from generate_clicked(*argss)
@mashb1t (long shot, but @lllyasviel as well)
I don't want to open another issue, but I would like to bring this to your attention once again (since you are kind of active lately ;))
I wrote "I'm thinking of creating a PR for this, but not in this exact form as it's kinda dumb right now", but as I investigated the options a little more, the dumbest solutions are sometimes the best ones. This queue at the webui level gives the best flexibility, as you can select any prompt, any style, even any model every time, and it remembers them all. The downside of this solution is that the models need to be loaded from scratch every time. From what I checked, the queue could be introduced in async_worker as well, so the model stays loaded, but you lose the ability to change it between generations. I'm just not sure which solution would suit best here. Any thoughts?
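To make the webui-level trade-off concrete, here is a minimal, framework-free sketch of that approach: every click snapshots the full parameter tuple, and the runner replays each snapshot in order. Note that generate_clicked here is just a stub standing in for the real Fooocus generator, so the whole thing can run on its own.

```python
# Minimal sketch of a webui-level prompt queue (stand-alone illustration).
# generate_clicked is a stub standing in for Fooocus' real generator function.

QUEUE = []

def generate_clicked(*args):
    # The real Fooocus function yields progress updates; here we just yield
    # a string so the control flow is visible.
    yield f"generated: {args[0]}"

def queue_add(*args):
    # Snapshot the full parameter tuple so later UI changes can't affect it.
    QUEUE.append(args)

def queue_start(*args):
    # With an empty queue, behave exactly like a plain Generate click.
    if not QUEUE:
        yield from generate_clicked(*args)
        return
    for entry in QUEUE:
        yield from generate_clicked(*entry)
    QUEUE.clear()

queue_add("a cat", "style A")
queue_add("a dog", "style B")
results = list(queue_start("ignored", "ignored"))
print(results)  # ['generated: a cat', 'generated: a dog']
```

Because each entry is a complete snapshot, prompts, styles, and even model choices are all remembered per job; the cost, as noted above, is that models may be reloaded between jobs.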
@docppp it works, thank you! I've only added the path to the file:
def queue_start(*args):
    text_file_path = 'C:/Users/blablablazhik/Desktop/Test.txt'
    with open(text_file_path, 'r') as text_file:
        lines = text_file.readlines()
    for prompt in lines:
        argss = list(args)
        argss[0] = prompt.strip()
        yield from generate_clicked(*argss)
But after testing I got one issue: "Fooocus V2" turns off after the first prompt, so the "Fooocus V2" expansion isn't applied to the 2nd and 3rd prompts. Maybe you know why?
@docppp your queue proposal does indeed provide flexibility, but for queueing a few more things have to be considered:
- single-user vs. multi-user use case
  - some deployments of Fooocus rely on it not exceeding the (now configurable) max_image_number
  - the fix for separation of sessions provided in my PRs 826 and 981
  - this comment still isn't solved
  - your suggestion to use a global called QUEUE for all users isn't suited for any multi-user scenario, as this shares data between all users. If user 1 queued changes and user 2 started the queue, user 2 would get the results of user 1 (also after the implementation of 981, as user 2 is technically the owner of the task). Your code works perfectly fine for non-shared installations though.
- parallelism (which Fooocus doesn't currently support and which would need optimisations in async_worker.py). Gradio just isn't optimized for parallel usage at all, but excels in sequential generation.
- model loading performance loss
- default queue size
  - there might be a default queue size which we have to keep in mind (40 in Gradio 4.13.0, but I assume unlimited in 3.41.0, see the Gradio docs)
- advanced parameter handling
  - advanced parameters are currently shared between generations, so when queueing a render and then changing an advanced parameter in the Developer Debug Mode tab, this overrides the previously queued entries. A solution would be to store all advanced params with each queue entry, then call advanced_parameters.set_all_advanced_parameters with them on queue execution in your loop. This does not work for multi-user scenarios though, as advanced_parameters uses globals, so there is no separation whatsoever.
- Gradio output
  - this is currently handled per task, and images will be lost if --disable-image-log is set. Handling too many images in one job leads to other problems though, which I already provided a fix for in 1013
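The advanced-parameter point above could be sketched roughly like this. This is a stand-alone illustration: set_all_advanced_parameters and generate_clicked are stubs standing in for the real Fooocus functions, and the dict-based CURRENT_ADPS just models the module-level globals.

```python
# Sketch: store the advanced-parameter snapshot with each queue entry and
# re-apply it right before generating. The real functions live in Fooocus'
# modules; these are simplified stand-ins.

CURRENT_ADPS = {}   # models the globals in advanced_parameters
QUEUE = []          # each entry: (ctrls tuple, adps snapshot)

def set_all_advanced_parameters(adps):
    CURRENT_ADPS.clear()
    CURRENT_ADPS.update(adps)

def generate_clicked(*args):
    # Yield the prompt together with the settings in effect at generation time.
    yield (args[0], dict(CURRENT_ADPS))

def queue_add(ctrls, adps):
    # dict(adps) copies the snapshot, so later UI edits can't mutate it.
    QUEUE.append((ctrls, dict(adps)))

def queue_start():
    for ctrls, adps in QUEUE:
        set_all_advanced_parameters(adps)  # restore this entry's settings
        yield from generate_clicked(*ctrls)
    QUEUE.clear()

queue_add(("prompt 1",), {"sharpness": 2.0})
queue_add(("prompt 2",), {"sharpness": 8.0})
results = list(queue_start())
print(results)
```

Each queued job generates with the advanced parameters it was queued with, even though a plain global would otherwise be overwritten by later UI changes. As noted above, this still doesn't separate users.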
AFAIK Gradio was originally implemented based on the assumption that it is used on a private machine for personal use, to make it as easy as possible for users to generate images (135#comment, 501 and 713 support this claim), for which it works great, with multi-user capabilities as an afterthought (see the points above). The community now patches the code more and more to make it work better for other scenarios.
It's hard to evaluate the full picture here without knowing the plans for the future of Fooocus.
To be specific, I'd propose to implement this feature the "right" way: not by using a global but a state (like state_is_generating, which might also have other issues btw...), so it is separated per user, plus some maximum queue size per user, as the addition of this feature has the potential to hold an instance hostage by queueing basically infinite times max_image_number (maybe by adding an argument for a maximum parallel queue_tasks_per_user or similar).
I really like the approach you took and would like to offer help to optimise the code to fulfill the above-mentioned points. This will most likely be an advanced feature, so we might also hide it at first and not show the buttons until a checkbox in Developer Debug Mode is activated.
Let's also hear the opinion of other users. Your suggestions are welcome!
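A per-user variant in the spirit of that state-based proposal could look something like the following. The handlers are written as pure functions that take the session's queue as input and return the updated queue, which is how Gradio's gr.State pattern works; the names queue_add, queue_start, and MAX_QUEUE_SIZE are hypothetical, not real Fooocus APIs.

```python
# Sketch of a per-session queue in the gr.State style: each handler receives
# the session's own queue and returns the updated one, so nothing is shared
# between users. All names here are illustrative, not Fooocus internals.

MAX_QUEUE_SIZE = 10  # per-user cap, analogous to a queue_tasks_per_user arg

def queue_add(queue_state, *ctrls):
    # Return a new list; Gradio would store it back into this session's gr.State.
    if len(queue_state) >= MAX_QUEUE_SIZE:
        return queue_state  # refuse silently; a real UI would show a warning
    return queue_state + [ctrls]

def queue_start(queue_state):
    for entry in queue_state:
        # The real handler would `yield from generate_clicked(*entry)` here.
        yield entry

user1 = []
user2 = []
user1 = queue_add(user1, "cat prompt")
user2 = queue_add(user2, "dog prompt")
# Each session only ever sees its own entries:
print(list(queue_start(user1)))  # [('cat prompt',)]
print(list(queue_start(user2)))  # [('dog prompt',)]
```

Because the queue lives in session state rather than a module-level global, user 2 can never drain or replay user 1's jobs, and the size cap prevents a single user from queueing unbounded work.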
Thanks for the thorough comment, mashb1t! It excellently highlights some concerns, like infinite queues and clogging the GPU. However, I don't think I have seen Fooocus used by multiple users anywhere. Right now, I believe people download it and use it locally on their machines, just like I do.
Also, docppp should provide some examples of how to use this feature, as I have tried it and it doesn't work for me. It isn't documented very well, but I am willing to add step-by-step tutorials for other users on how to use the queue if someone shows me the essentials to make it work.
This all looks very promising, so I am more than willing to help in areas I can.
Tbh, I didn't even consider the multi-user scenario. As you said, the "assumption is that it is used on a private machine for personal use", but if Fooocus is pushed in a multi-user direction, then indeed the queue system should be well thought through.
I don't quite understand the "default queue size" and "Gradio output" points. This type of queue basically simulates the user setting the options, typing the prompt, and clicking Generate one by one. If you are referring to the gallery shown after generation, adding a limit is a very simple solution.
I have prepared a cleaner version of my idea here: https://github.com/lllyasviel/Fooocus/pull/1773
@LordMilutin The main idea is to create some sort of object that remembers everything you set up to the moment of clicking the Queue button. It is stored, so you can modify the prompt or options, click once again, and now you have 2 sets of parameters stored. Clicking Generate will run as normal, but several times, with those exact parameters.
@LordMilutin quick reference for multi user scenarios: https://github.com/lllyasviel/Fooocus/discussions/1639, https://github.com/lllyasviel/Fooocus/issues/1771, https://github.com/lllyasviel/Fooocus/issues/1607, all API issues like https://github.com/lllyasviel/Fooocus/issues/1224 or https://github.com/lllyasviel/Fooocus/issues/1259 etc. ^^
@docppp just tested #1773, and it works exactly like I need for my pipeline! Can I ask you for help adding the ability to read a txt file with multiple prompts, one per line? I've done what you wrote, but got messed up with the style settings after the first prompt.
This isn't in 2.3.0...
Didn't find it either ((
My bad, accidentally referred to in milestone and automatically closed.
Ah bummer, I was looking forward to it in this release. Any ETA on when it will be implemented in a release?
@LordMilutin no, no ETA. I'll also be out for the next 2-3 weeks, feel free to check out the PR and make improvements based on it.
Hello! I would like to know if it's possible to implement a prompt queue. For example, I have about 20 prompts that need to generate 30 images. Instead of waiting for the queue to finish for each prompt one by one and retyping another one, it would be awesome if we had a queue prompt option so that we could leave as many prompts as we want and leave the PC overnight to generate them, without user input in-between.
Not technically a queue system, but you can achieve something similar by putting the 20 prompts onto 20 lines of a wildcard file and triggering it using the __wildcards__ syntax.
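For reference, a sketch of how that could look. The file name mylist.txt is just an example; if I'm not mistaken, Fooocus picks up wildcard files from its wildcards folder and references them by file name:

```text
# wildcards/mylist.txt (one prompt per line)
a castle on a hill at sunset
a cyberpunk street in the rain
a watercolor fox in a forest

# Then, in the prompt box, with Image Number set high enough:
__mylist__, highly detailed
```

Each generated image substitutes __mylist__ with a line from the file, so a batch run works through the prompt list without further input.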
Is this being worked on?
Hello! I would like to know if it's possible to implement a prompt queue. For example, I have about 20 prompts that need to generate 30 images. Instead of waiting for the queue to finish for each prompt one by one and retyping another one, it would be awesome if we had a queue prompt option so that we could leave as many prompts as we want and leave the PC overnight to generate them, without user input in-between.
Not technically a queue system, but you can achieve something similar by putting the 20 prompts onto 20 lines of a wildcard file and triggering it using the __wildcards__ syntax.
Is there tutorial on this somewhere?
Is there tutorial on this somewhere?
https://youtu.be/E_R7tnfXKCM?t=56
Is there tutorial on this somewhere?
https://youtu.be/E_R7tnfXKCM?t=56
Thank you very much bro 😁
Hello! I would like to know if it's possible to implement a prompt queue. For example, I have about 20 prompts that need to generate 30 images. Instead of waiting for the queue to finish for each prompt one by one and retyping another one, it would be awesome if we had a queue prompt option so that we could leave as many prompts as we want and leave the PC overnight to generate them, without user input in-between.
Not technically a queue system, but you can achieve something similar by putting the 20 prompts onto 20 lines of a wildcard file and triggering it using the __wildcards__ syntax.
Indeed, but I do not have control over it. If I put in 20 prompts and run it 20 times, the same prompt can repeat multiple times while some prompts never trigger. That is why I am interested in the queue mode that docppp made; it worked perfectly before, but it is not compatible with newer versions of Fooocus. 😞