API generations with the SDXL example workflow only succeed on the first generation
Console output:
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:05<00:00, 3.57it/s]
100%|████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:01<00:00, 3.46it/s]
Prompt executed in 14.06 seconds
got prompt
Prompt executed in 0.00 seconds
got prompt
Prompt executed in 0.00 seconds
got prompt
Prompt executed in 0.00 seconds
And here's my workflow:
{
  "4": {
    "inputs": {
      "ckpt_name": "sd_xl_base_1.0.safetensors"
    },
    "class_type": "CheckpointLoaderSimple"
  },
  "5": {
    "inputs": {
      "width": 1024,
      "height": 1024,
      "batch_size": 1
    },
    "class_type": "EmptyLatentImage"
  },
  "6": {
    "inputs": {
      "text": "evening sunset scenery blue sky nature, glass bottle with a galaxy in it",
      "clip": [
        "4",
        1
      ]
    },
    "class_type": "CLIPTextEncode"
  },
  "7": {
    "inputs": {
      "text": "text, watermark",
      "clip": [
        "4",
        1
      ]
    },
    "class_type": "CLIPTextEncode"
  },
  "10": {
    "inputs": {
      "add_noise": "enable",
      "noise_seed": 721897303308196,
      "steps": 25,
      "cfg": 8,
      "sampler_name": "euler",
      "scheduler": "normal",
      "start_at_step": 0,
      "end_at_step": 20,
      "return_with_leftover_noise": "enable",
      "model": [
        "4",
        0
      ],
      "positive": [
        "6",
        0
      ],
      "negative": [
        "7",
        0
      ],
      "latent_image": [
        "5",
        0
      ]
    },
    "class_type": "KSamplerAdvanced"
  },
  "11": {
    "inputs": {
      "add_noise": "disable",
      "noise_seed": 0,
      "steps": 25,
      "cfg": 8,
      "sampler_name": "euler",
      "scheduler": "normal",
      "start_at_step": 20,
      "end_at_step": 10000,
      "return_with_leftover_noise": "disable",
      "model": [
        "12",
        0
      ],
      "positive": [
        "15",
        0
      ],
      "negative": [
        "16",
        0
      ],
      "latent_image": [
        "10",
        0
      ]
    },
    "class_type": "KSamplerAdvanced"
  },
  "12": {
    "inputs": {
      "ckpt_name": "sd_xl_refiner_1.0.safetensors"
    },
    "class_type": "CheckpointLoaderSimple"
  },
  "15": {
    "inputs": {
      "text": "evening sunset scenery blue sky nature, glass bottle with a galaxy in it",
      "clip": [
        "12",
        1
      ]
    },
    "class_type": "CLIPTextEncode"
  },
  "16": {
    "inputs": {
      "text": "text, watermark",
      "clip": [
        "12",
        1
      ]
    },
    "class_type": "CLIPTextEncode"
  },
  "17": {
    "inputs": {
      "samples": [
        "11",
        0
      ],
      "vae": [
        "12",
        2
      ]
    },
    "class_type": "VAEDecode"
  },
  "19": {
    "inputs": {
      "filename_prefix": "ComfyUI",
      "images": [
        "17",
        0
      ]
    },
    "class_type": "SaveImage"
  }
}
basic_api_example.py and websockets_api_example.py remain unchanged; the issue persists with both scripts.
Also, when I changed my prompt, restarted Comfy, and tried generating, it still generated an image with the old prompt. There were no generations queued in the GUI beforehand, so something funky is going on.
ComfyUI only re-runs nodes if something changed, so if you queue the same prompt twice it won't do anything.
I see. So I guess best practice would be to set/adjust batch_size for multiple generations with the same prompt?
change the seed.
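For example, here's a minimal sketch of that approach: it loads the workflow above, gives the base KSamplerAdvanced (node "10") a fresh noise_seed before every queue, and POSTs the graph to ComfyUI's /prompt endpoint. It assumes the server is listening on the default 127.0.0.1:8188 and that the workflow JSON is saved as sdxl_workflow_api.json (a hypothetical filename):

import json
import random
import urllib.request

# Hypothetical filename: point this at the API-format workflow saved above.
with open("sdxl_workflow_api.json") as f:
    workflow = json.load(f)

def queue_prompt(prompt):
    # POST the graph to ComfyUI's /prompt endpoint (default address assumed).
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request("http://127.0.0.1:8188/prompt", data=data)
    return urllib.request.urlopen(req).read()

for _ in range(3):
    # Randomize the base sampler's seed so each queued prompt differs from
    # the last one and ComfyUI actually re-runs the graph.
    workflow["10"]["inputs"]["noise_seed"] = random.randint(0, 2**63 - 1)
    queue_prompt(workflow)

With the seed changed on each iteration, every queued prompt produces a new image instead of hitting the node cache.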
I see.
Thanks for the quick answers, and for everything you guys are doing here. I've been having so much fun getting into programming while playing with the stuff you're making!