ComfyUI
Allow the prompt request to specify the prompt ID.
This makes it easier to write asynchronous clients that submit requests, because they can store the task by ID immediately rather than waiting for the prompt API to return.
Specifically I want it so that Krita AI Diffusion can stop fetching the prompt id async, because I suspect it's causing a timing issue.
Should I change the API demo to generate a uuid on the client side?
Good change. Maybe there can be an example/comment/line added here https://github.com/comfyanonymous/ComfyUI/blob/master/script_examples/basic_api_example.py since there is no api spec for the webserver yet.
Added an example to the websocket api test, because the simple api test doesn't do anything with the response.
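For reference, a minimal sketch of what client-side ID generation looks like. The field names follow the `/prompt` request JSON; the empty workflow and the `client_id` value are placeholders:

```python
import json
import uuid

# Client-side: generate the prompt ID ourselves instead of reading it
# back from the server's response to /prompt.
prompt_id = str(uuid.uuid4())
payload = {
    "prompt": {},              # your workflow graph goes here
    "client_id": "my-client",  # placeholder client id
    "prompt_id": prompt_id,    # the ID the server will use for this task
}
body = json.dumps(payload).encode("utf-8")
# POST body to http://127.0.0.1:8188/prompt; the task can be tracked
# by prompt_id immediately, without waiting for the response.
```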
This is fine but you need to add a check to see if the prompt id is already used if it is specified and return an appropriate error.
Okay so I can do that but it's gonna be ugly. Because of the PromptQueue mutex, the only place to check that is in PromptQueue.put, otherwise there's always a chance of timing errors. But right now put doesn't even know what a prompt_id is. If there's a trivial way to do it I don't see it. I'd just trust the client to rng correctly.
@FeepingCreature we're taking a look at this again (we now need this feature ourselves) and I'll get back to you likely within the next day. It's been a while so just wanted to check if you're still around to make some changes if we need to? No worries if you're busy, we can build on top of your PR. Thanks a lot for this!
Cheers! I'm here, just lmk what you need.
So I agree with @comfyanonymous that we should check for duplicate prompt ids - it's not difficult to have a frontend bug where we send the same prompt id multiple times, and ideally the server wouldn't end up in an inconsistent state because of it. Duplicate prompt IDs break the server in subtle ways; for example, the history list is keyed by prompt id. In general, it's good to maintain the property that prompt ids are unique.
Do you want to keep track of the used prompt ids in PromptQueue with a set? In PromptQueue.put, inside the mutex section, you can check the set for reuse (and throw an error if so), and add the id to the set otherwise. In PromptServer's post_prompt, you can catch the error and return an error response. This is technically a memory leak, but I see we allow 10,000 history items (which is a lot), and a prompt id is a subset of each history item, so it should be fine for now and we can optimize later.
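The suggestion above can be sketched roughly like this. This is a hypothetical, stripped-down PromptQueue, not the real ComfyUI class; the item shape `(number, prompt_id, ...)` matches how the queue tuples are described in this thread:

```python
import heapq
import threading

class PromptQueue:
    """Minimal sketch of set-based duplicate-ID tracking (not the real class)."""

    def __init__(self):
        self.mutex = threading.RLock()
        self.queue = []
        self.used_prompt_ids = set()

    def put(self, item):
        # item is shaped like (number, prompt_id, prompt, ...)
        prompt_id = item[1]
        with self.mutex:
            # Check and record inside the mutex so there is no window
            # for a racing put() to slip in the same id.
            if prompt_id in self.used_prompt_ids:
                raise ValueError(f"prompt_id {prompt_id!r} already used")
            self.used_prompt_ids.add(prompt_id)
            heapq.heappush(self.queue, item)
```

post_prompt would then catch the ValueError and turn it into an error response.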
Alright, how about this? I refactored the job queue a bit so that it just uses the prompt_id for everything, but it's a bit smelly now in that it exploits the fact that it knows item[1] is the prompt_id. I didn't want to change the API because I wasn't sure if any extensions used the job queue directly.
edit: Hm, though I changed get already so I might as well.
edit: Hm, no I'd have to change get_current_queue which I don't trust. I think this fairly minimal change is fine actually.
Thanks for the changes! They look good to me
Summary of changes since last update:
- We're keeping track of all the used prompt IDs in PromptQueue and guaranteeing that the id will be unique
- We're getting rid of task_counter, which was the auto-incrementing unique ID of a task before. We use prompt id in its place.
I tested the PR locally with latest frontend. I can queue jobs, cancel jobs, see them in history. Everything seems to work.
@comfyanonymous / any other code owners: want to approve and merge?
@FeepingCreature many apologies for the churn, but we discussed this more with the core team and the team decided not to allow such explicit duplicate tracking. We'll allow duplicate prompt ids, as long as it doesn't break the server. Would you mind reverting back to your previous implementation?
I did a bunch of testing, and found that the only way to break the server is if we receive prompt IDs of inconsistent types (e.g. int and str or list and str). We should just verify that the prompt ID is string before putting it in the queue, and as long as we do that:
- the server never crashes no matter what we do
- if two prompts have same prompt id, the later one replaces the previous one in history
- if multiple prompts have the same prompt id and number, then heapq.heappush can fail because the tuple comparison falls through all the way to the prompt dict, and we get a "< not supported for dicts" error. The server returns a 500 internal server error, but it doesn't crash, so it's essentially the equivalent of an error response. The server keeps working fine after - no inconsistent state.
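The heapq failure mode above is easy to reproduce in isolation; the tuple values here are made up for illustration:

```python
import heapq

h = []
# Queue items shaped like (number, prompt_id, prompt_dict).
heapq.heappush(h, (0, "same-id", {"nodes": 1}))
try:
    # Same number and same prompt_id: tuple comparison can't decide
    # on the first two fields, so it falls through to comparing the dicts.
    heapq.heappush(h, (0, "same-id", {"nodes": 2}))
except TypeError as e:
    print(e)  # "'<' not supported between instances of 'dict' and 'dict'"
```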
So the only change we'd like you to make beyond the previous implementation is validating the prompt id is a string (and returning an error if not).
I just moved str to the outside, is that fine?
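In other words, something along these lines; `resolve_prompt_id` is a hypothetical helper name, not the actual code in the PR:

```python
import uuid

def resolve_prompt_id(json_data):
    # Whatever the client sends (int, list, str, or nothing at all)
    # is coerced to a string before it ever reaches the queue.
    return str(json_data.get("prompt_id", uuid.uuid4()))
```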
Hah, yeah that works. @comfyanonymous works for you? I tested:
- No change in ComfyUI frontend task running
- If client passes in different types (int, str, list) in the request json, everything just gets converted to string. Nothing crashes.
- I can still force the "< is invalid for dict" error if there are too many requests with the same prompt id and number, but that doesn't crash the server. Otherwise we can use the number field to queue at the front / back and it all works.
I actually really like this PR, as being able to specify prompt_id is very handy.