prompts queue but do not execute
Tried the "api example": I can read multiple "got prompt" on cmd but no execution at all. I open up the browser interface, and hit "queue prompt" there to test, and I get another "got prompt" on the cmd and the "queue size" counter goes to 1 and to 0 immediately. Am I doing something wrong? Should I call "execute" or something on my python script, so it starts the queue? Cause the browser interface wont start it too. Thanks.
Oh that's because ComfyUI has an optimization where it only executes nodes again if something changed from the last execution.
This means if you queue the same thing twice it won't execute anything.
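In practice, the usual workaround for scripted queuing is to change something in the prompt between runs, typically the seed. A minimal sketch, assuming a workflow exported in API format, the third-party requests package, and that node "3" is the sampler (adjust to your own workflow):

```python
import json
import random
import requests  # assumption: third-party package, pip install requests

# Load a workflow exported in API format (dev-mode save option in the UI).
with open("workflow_api.json") as f:
    prompt = json.load(f)

# Randomize the seed so each queued prompt differs from the last one
# and the "nothing changed" optimization doesn't skip execution.
# Node id "3" and the "seed" input name are assumptions.
prompt["3"]["inputs"]["seed"] = random.randint(0, 2**32 - 1)

resp = requests.post("http://127.0.0.1:8188/prompt", json={"prompt": prompt})
resp.raise_for_status()
```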
facepalm sorry. It works.
One more thing:
How can I go about using full image paths in LoadImage?
The value in prompt["12"]["inputs"]["image"] doesn't work as a full path.
I tried clearing it out and changing prompt["12"]["inputs"]["choose file to upload"] to a path as well, but no dice.
I'm trying to build an iterator to process a batch of images in a folder.
Thanks!
Right now you have to POST the image to /upload/image like the UI does, which will return a name in the JSON response that you can use in the LoadImage node.
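For example, a minimal sketch of that upload step with the requests package; the "image" form field and the "name" response key are assumptions based on what the UI sends, so verify them in your browser's network tab:

```python
import json
import requests  # assumption: third-party package, pip install requests

server = "http://127.0.0.1:8188"

with open("workflow_api.json") as f:
    prompt = json.load(f)  # workflow exported in API format

# Upload a local file the way the UI does; field/key names are assumptions.
with open("/full/path/to/picture.png", "rb") as f:
    resp = requests.post(f"{server}/upload/image", files={"image": f})
resp.raise_for_status()
uploaded_name = resp.json()["name"]

# Point the LoadImage node at the uploaded file's returned name.
prompt["12"]["inputs"]["image"] = uploaded_name
```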
I can see why.
In the LoadImage node, you call
`image_path = os.path.join(self.input_dir, image)`
so the `image` variable is just the file name, and the folder is set above by
`input_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), "input")`.
I'm thinking about copying this class as a custom node where the `image` value is the full path. The alternative would be to add a boolean field here that toggles joining the folder path, true by default, but I think it could break compatibility with older JSONs generated and saved without this field.
Just sharing my use case.
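For reference, a rough sketch of what such a full-path node could look like, modeled on LoadImage's loading code; the class name and the "image_path" input are hypothetical, and the mask output is left out for brevity:

```python
import numpy as np
import torch
from PIL import Image, ImageOps

class LoadImageFromPath:
    """Hypothetical LoadImage variant that accepts a full path instead of
    a file name joined with the input directory (sketch, not a built-in)."""

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"image_path": ("STRING", {"default": ""})}}

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "load_image"
    CATEGORY = "image"

    def load_image(self, image_path):
        i = Image.open(image_path)  # no os.path.join(input_dir, ...) here
        i = ImageOps.exif_transpose(i)
        image = np.array(i.convert("RGB")).astype(np.float32) / 255.0
        return (torch.from_numpy(image)[None,],)  # add batch dimension

NODE_CLASS_MAPPINGS = {"LoadImageFromPath": LoadImageFromPath}
```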
> Right now you have to POST the image to /upload/image like the UI does, which will return a name in the JSON response that you can use in the LoadImage node.
oh, I see, thanks again.
> Oh that's because ComfyUI has an optimization where it only executes nodes again if something changed from the last execution.
> This means if you queue the same thing twice it won't execute anything.
Honestly, to date that feature has only given me problems and no benefit. Also, if you use the same folders for batch processing, the program can't tell that you've changed the contents of the folders, so it prevents you from re-processing the same folders even though the content is totally different. I strongly hope it will be possible to disable this behavior.
This is also the feature that keeps models loaded in memory in between prompt executions.
If you use the LoadImage node, it actually checks whether the image on disk changed using a hash, so it will re-execute properly if it did: https://github.com/comfyanonymous/ComfyUI/blob/master/nodes.py#L832
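Roughly how that check looks, paraphrasing the linked code (exact details may differ between ComfyUI versions): the node re-executes whenever the value returned here changes between runs.

```python
import hashlib
import os

class LoadImage:
    input_dir = "input"  # simplified; the real node resolves this path

    # Paraphrase of LoadImage.IS_CHANGED from nodes.py: hash the file's
    # contents so a changed image on disk triggers re-execution.
    @classmethod
    def IS_CHANGED(cls, image):
        image_path = os.path.join(cls.input_dir, image)
        m = hashlib.sha256()
        with open(image_path, "rb") as f:
            m.update(f.read())
        return m.digest().hex()
```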
> Honestly, to date that feature has only given me problems and no benefit.
I strongly disagree; that feature is incredible for stuff like two-pass workflows where one might only want to repeat the second pass, and much more. Because of this I don't have to bother saving the first pass, since it stays cached as long as I need it.
> This is also the feature that keeps models loaded in memory in between prompt executions.
> If you use the LoadImage node, it actually checks whether the image on disk changed using a hash, so it will re-execute properly if it did: https://github.com/comfyanonymous/ComfyUI/blob/master/nodes.py#L832
I'm using the batch image node from WAS. You cannot do batching with LoadImage.
> > Honestly, to date that feature has only given me problems and no benefit.
>
> I strongly disagree; that feature is incredible for stuff like two-pass workflows where one might only want to repeat the second pass, and much more. Because of this I don't have to bother saving the first pass, since it stays cached as long as I need it.
How can you even disagree with that sentence? It's MY experience, not an opinion to agree or disagree with.
@AbyszOne, have you managed to generate a sequence of images from an input folder using the API?
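For what it's worth, one way such an iterator could look, combining the /upload/image step from above with one queued prompt per file; the folder path, node id "12", and response keys are assumptions:

```python
import copy
import json
import os
import requests  # assumption: third-party package, pip install requests

server = "http://127.0.0.1:8188"
folder = "/path/to/images"  # hypothetical input folder

with open("workflow_api.json") as f:
    base_prompt = json.load(f)  # workflow exported in API format

for filename in sorted(os.listdir(folder)):
    # Upload each file; the "image" field and "name" key are assumptions.
    with open(os.path.join(folder, filename), "rb") as f:
        r = requests.post(f"{server}/upload/image", files={"image": f})
    r.raise_for_status()
    name = r.json()["name"]

    # Queue one prompt per image; the changing LoadImage input means the
    # caching optimization never skips a run.
    prompt = copy.deepcopy(base_prompt)
    prompt["12"]["inputs"]["image"] = name  # assumed LoadImage node id
    requests.post(f"{server}/prompt", json={"prompt": prompt})
```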
@comfyanonymous To the dev:

> I strongly disagree; that feature is incredible for stuff like two-pass workflows

@throwaway-mezzo-mix He wasn't asking to disable it by default, but @AbyszOne says it would be good to have the option to disable it!
And I agree with him.
Take a look at the example below: I have 90 images and I want to generate a corresponding SD image for each using the Comfy API. It starts at very low memory usage, which then climbs step by step. After about 15 generated images the memory usage keeps getting higher, around 11.6 GB by the 21st image, and after the 23rd generated image it crashes because of the high memory usage (kernel goes down).
@comfyanonymous, can you please take a look at this, try it while monitoring memory, and provide a possible fix?
Thanks
@AbyszOne Have you experienced the same behavior?
Closing since the original question was addressed. @p0mad, if the memory leak issue is still happening, please open a new issue.