# Error Running Complex Workflow

## Expected Behavior
Execution completes properly.

## Actual Behavior
Execution terminates with an error.

## Steps to Reproduce
1. Load the workflow.
2. Run the workflow.

## Debug Logs
# ComfyUI Error Report
## Error Details
- **Node Type:** SamplerCustomAdvanced
- **Exception Type:** torch.cuda.OutOfMemoryError
- **Exception Message:** Allocation on device
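This exception means the CUDA allocator could not satisfy a request mid-sampling. One common coping pattern is to catch the OOM and retry at reduced load; the sketch below shows that pattern in framework-agnostic form. `FakeOOM` and `fake_sampler` are hypothetical stand-ins for illustration only; with ComfyUI/PyTorch the real exception would be `torch.cuda.OutOfMemoryError`, and calling `torch.cuda.empty_cache()` between attempts is also typical.

```python
# Hedged sketch: retry a sampling call at reduced batch size after an OOM.
# The exception type is a parameter so the pattern is framework-agnostic;
# in PyTorch you would pass torch.cuda.OutOfMemoryError.

def run_with_backoff(fn, batch_size, oom_exc, min_batch=1):
    """Halve the batch size after each OOM until the call succeeds."""
    while True:
        try:
            return fn(batch_size)
        except oom_exc:
            if batch_size <= min_batch:
                raise  # cannot shrink further; re-raise the OOM
            batch_size //= 2  # retry with half the batch

# Toy usage with stand-in objects: a fake sampler that "fits" only at batch <= 2.
class FakeOOM(Exception):
    pass

def fake_sampler(n):
    if n > 2:
        raise FakeOOM()
    return f"sampled batch of {n}"

print(run_with_backoff(fake_sampler, 8, FakeOOM))  # sampled batch of 2
```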
## Stack Trace
File "E:\ComfyUI\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "E:\ComfyUI\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 612, in sample
samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\samplers.py", line 716, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\samplers.py", line 695, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\samplers.py", line 600, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\k_diffusion\sampling.py", line 653, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\samplers.py", line 299, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\samplers.py", line 682, in __call__
return self.predict_noise(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\samplers.py", line 685, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\samplers.py", line 279, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\model_base.py", line 142, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\ldm\flux\model.py", line 159, in forward
out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\ldm\flux\model.py", line 118, in forward_orig
img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\ldm\flux\layers.py", line 166, in forward
torch.cat((txt_v, img_v), dim=2), pe=pe)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
## System Information
- ComfyUI Version: v0.2.2-43-ge813abb
- Arguments: ComfyUI\main.py --windows-standalone-build
- OS: nt
- Python Version: 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)]
- Embedded Python: true
- PyTorch Version: 2.3.0+cu121
## Devices
- Name: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
- Type: cuda
- VRAM Total: 25756565504
- VRAM Free: 23962058752
- Torch VRAM Total: 67108864
- Torch VRAM Free: 33554432
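For reference, the VRAM figures above are raw byte counts. Converting them (plain arithmetic on the numbers reported above, no other assumptions) shows roughly 24 GiB total with about 22.3 GiB free at startup:

```python
def bytes_to_gib(n: int) -> float:
    """Convert a raw byte count to GiB, rounded to two decimals."""
    return round(n / 2**30, 2)

vram_total = 25756565504  # "VRAM Total" from the report above
vram_free = 23962058752   # "VRAM Free" from the report above

print(bytes_to_gib(vram_total))  # 23.99
print(bytes_to_gib(vram_free))   # 22.32
```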
## Logs
2024-09-16 15:33:28,561 - root - INFO - Total VRAM 24563 MB, total RAM 65478 MB
2024-09-16 15:33:28,561 - root - INFO - pytorch version: 2.3.0+cu121
2024-09-16 15:33:30,902 - root - INFO - xformers version: 0.0.26.post1
2024-09-16 15:33:30,902 - root - INFO - Set vram state to: NORMAL_VRAM
2024-09-16 15:33:30,903 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
2024-09-16 15:33:31,135 - root - INFO - Using xformers cross attention
2024-09-16 15:33:32,227 - root - INFO - [Prompt Server] web root: E:\ComfyUI\ComfyUI\web
2024-09-16 15:33:32,233 - root - INFO - Adding extra search path checkpoints E:/a1111/stable-diffusion-webui/models/Stable-diffusion
2024-09-16 15:33:32,233 - root - INFO - Adding extra search path configs E:/a1111/stable-diffusion-webui/models/Stable-diffusion
2024-09-16 15:33:32,233 - root - INFO - Adding extra search path vae E:/a1111/stable-diffusion-webui/models/VAE
2024-09-16 15:33:32,233 - root - INFO - Adding extra search path loras E:/a1111/stable-diffusion-webui/models/Lora
2024-09-16 15:33:32,233 - root - INFO - Adding extra search path loras E:/a1111/stable-diffusion-webui/models/LyCORIS
2024-09-16 15:33:32,233 - root - INFO - Adding extra search path upscale_models E:/a1111/stable-diffusion-webui/models/ESRGAN
2024-09-16 15:33:32,233 - root - INFO - Adding extra search path upscale_models E:/a1111/stable-diffusion-webui/models/RealESRGAN
2024-09-16 15:33:32,233 - root - INFO - Adding extra search path upscale_models E:/a1111/stable-diffusion-webui/models/SwinIR
2024-09-16 15:33:32,233 - root - INFO - Adding extra search path embeddings E:/a1111/stable-diffusion-webui/embeddings
2024-09-16 15:33:32,233 - root - INFO - Adding extra search path hypernetworks E:/a1111/stable-diffusion-webui/models/hypernetworks
2024-09-16 15:33:32,233 - root - INFO - Adding extra search path controlnet E:/a1111/stable-diffusion-webui/models/ControlNet
2024-09-16 15:33:32,233 - root - INFO - Adding extra search path ipadapter E:/a1111/stable-diffusion-webui/models/ipadapter
2024-09-16 15:33:32,233 - root - INFO - Adding extra search path instantid E:/a1111/stable-diffusion-webui/models/instantid
2024-09-16 15:33:35,553 - root - INFO - Total VRAM 24563 MB, total RAM 65478 MB
2024-09-16 15:33:35,553 - root - INFO - pytorch version: 2.3.0+cu121
2024-09-16 15:33:35,553 - root - INFO - xformers version: 0.0.26.post1
2024-09-16 15:33:35,553 - root - INFO - Set vram state to: NORMAL_VRAM
2024-09-16 15:33:35,553 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
2024-09-16 15:33:37,423 - root - INFO - --------------
2024-09-16 15:33:37,423 - root - INFO -  ### Mixlab Nodes: Loaded
2024-09-16 15:33:37,423 - root - INFO - ChatGPT.available True
2024-09-16 15:33:37,424 - root - INFO - edit_mask.available True
2024-09-16 15:33:37,428 - root - INFO - LaMaInpainting.available True
2024-09-16 15:33:38,014 - root - INFO - ClipInterrogator.available True
2024-09-16 15:33:38,397 - root - INFO - PromptGenerate.available True
2024-09-16 15:33:38,397 - root - INFO - ChinesePrompt.available True
2024-09-16 15:33:38,397 - root - INFO - RembgNode_.available True
2024-09-16 15:33:39,389 - root - INFO - TripoSR.available
2024-09-16 15:33:39,390 - root - INFO - MiniCPMNode.available
2024-09-16 15:33:39,443 - root - INFO - Scenedetect.available
2024-09-16 15:33:39,537 - root - INFO - FishSpeech.available
2024-09-16 15:33:39,537 - root - INFO -  --------------
2024-09-16 15:33:41,492 - root - INFO -
Import times for custom nodes:
2024-09-16 15:33:41,492 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\websocket_image_save.py
2024-09-16 15:33:41,492 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus
2024-09-16 15:33:41,492 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_SimpleMath
2024-09-16 15:33:41,492 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\cg-use-everywhere
2024-09-16 15:33:41,492 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\lora-info
2024-09-16 15:33:41,492 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_image2halftone
2024-09-16 15:33:41,492 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux
2024-09-16 15:33:41,492 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\ControlAltAI-Nodes
2024-09-16 15:33:41,492 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
2024-09-16 15:33:41,492 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\mikey_nodes
2024-09-16 15:33:41,492 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\ComfyMath
2024-09-16 15:33:41,493 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_essentials
2024-09-16 15:33:41,493 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-DepthAnythingV2
2024-09-16 15:33:41,493 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale
2024-09-16 15:33:41,493 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
2024-09-16 15:33:41,493 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\Derfuu_ComfyUI_ModdedNodes
2024-09-16 15:33:41,493 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-KJNodes
2024-09-16 15:33:41,493 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\rgthree-comfy
2024-09-16 15:33:41,493 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_Comfyroll_CustomNodes
2024-09-16 15:33:41,493 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Florence2
2024-09-16 15:33:41,493 - root - INFO - 0.0 seconds: E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_YFG_Comical
2024-09-16 15:33:41,493 - root - INFO - 0.1 seconds: E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack
2024-09-16 15:33:41,493 - root - INFO - 0.1 seconds: E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Crystools
2024-09-16 15:33:41,493 - root - INFO - 0.1 seconds: E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_MaraScott_Nodes
2024-09-16 15:33:41,493 - root - INFO - 0.2 seconds: E:\ComfyUI\ComfyUI\custom_nodes\comfyui-ollama
2024-09-16 15:33:41,493 - root - INFO - 0.3 seconds: E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Manager
2024-09-16 15:33:41,493 - root - INFO - 1.5 seconds: E:\ComfyUI\ComfyUI\custom_nodes\was-node-suite-comfyui
2024-09-16 15:33:41,493 - root - INFO - 1.8 seconds: E:\ComfyUI\ComfyUI\custom_nodes\comfyui-art-venture
2024-09-16 15:33:41,493 - root - INFO - 3.6 seconds: E:\ComfyUI\ComfyUI\custom_nodes\comfyui-mixlab-nodes
2024-09-16 15:33:41,493 - root - INFO -
2024-09-16 15:33:41,514 - root - INFO -
2024-09-16 15:33:41,514 - root - INFO -
Starting server
2024-09-16 15:33:41,514 - root - INFO - To see the GUI go to: http://10.0.1.4:8188 or http://127.0.0.1:8188
2024-09-16 15:33:41,514 - root - INFO - To see the GUI go to: https://10.0.1.4:8189 or https://127.0.0.1:8189
2024-09-16 15:33:48,859 - httpx - INFO - HTTP Request: GET http://10.0.1.3:11434/api/tags "HTTP/1.1 200 OK"
2024-09-16 15:33:49,060 - httpx - INFO - HTTP Request: GET http://10.0.1.3:11434/api/tags "HTTP/1.1 200 OK"
2024-09-16 15:41:46,660 - httpx - INFO - HTTP Request: GET http://10.0.1.3:11434/api/tags "HTTP/1.1 200 OK"
2024-09-16 15:41:46,860 - httpx - INFO - HTTP Request: GET http://10.0.1.3:11434/api/tags "HTTP/1.1 200 OK"
2024-09-16 15:42:07,863 - httpx - INFO - HTTP Request: GET http://10.0.1.3:11434/api/tags "HTTP/1.1 200 OK"
2024-09-16 15:42:08,067 - httpx - INFO - HTTP Request: GET http://10.0.1.3:11434/api/tags "HTTP/1.1 200 OK"
2024-09-16 15:43:03,958 - root - INFO - got prompt
2024-09-16 15:43:04,054 - root - ERROR - Failed to validate prompt for output 291:
2024-09-16 15:43:04,054 - root - ERROR - * FluxSamplerParams+ 55:
2024-09-16 15:43:04,054 - root - ERROR - - Required input is missing: conditioning
2024-09-16 15:43:04,054 - root - ERROR - Output will be ignored
2024-09-16 15:43:04,054 - root - ERROR - Failed to validate prompt for output 65:
2024-09-16 15:43:04,054 - root - ERROR - Output will be ignored
2024-09-16 15:43:04,055 - root - ERROR - Failed to validate prompt for output 308:
2024-09-16 15:43:04,055 - root - ERROR - Output will be ignored
2024-09-16 15:43:04,065 - root - WARNING - WARNING: object supporting the buffer API required
2024-09-16 15:43:04,491 - root - INFO - Using xformers attention in VAE
2024-09-16 15:43:04,492 - root - INFO - Using xformers attention in VAE
2024-09-16 15:43:06,056 - py.warnings - WARNING - E:\ComfyUI\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
warnings.warn(
2024-09-16 15:43:11,320 - root - WARNING - clip missing: ['text_projection.weight']
2024-09-16 15:43:12,781 - root - INFO - model weight dtype torch.bfloat16, manual cast: None
2024-09-16 15:43:12,781 - root - INFO - model_type FLUX
2024-09-16 15:43:32,000 - root - ERROR - !!! Exception during processing !!! FluxSamplerParams.execute() missing 1 required positional argument: 'conditioning'
2024-09-16 15:43:32,014 - root - ERROR - Traceback (most recent call last):
File "E:\ComfyUI\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "E:\ComfyUI\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: FluxSamplerParams.execute() missing 1 required positional argument: 'conditioning'
2024-09-16 15:43:32,022 - root - INFO - Prompt executed in 27.96 seconds
2024-09-16 15:43:36,394 - root - INFO - got prompt
2024-09-16 15:43:36,611 - root - WARNING - WARNING: object supporting the buffer API required
2024-09-16 15:43:39,318 - root - INFO - Requested to load FluxClipModel_
2024-09-16 15:43:39,318 - root - INFO - Loading 1 new model
2024-09-16 15:43:42,175 - root - INFO - loaded completely 0.0 9319.23095703125 True
2024-09-16 15:43:42,581 - root - INFO - Sampling 1/1 with seed 269226, sampler euler, scheduler beta, steps 20, guidance 3.5, max_shift 1.15, base_shift 0.5, denoise 1
2024-09-16 15:43:42,620 - root - INFO - Requested to load Flux
2024-09-16 15:43:42,621 - root - INFO - Loading 1 new model
2024-09-16 15:43:51,967 - root - INFO - loaded partially 21358.832 21358.652465820312 0
2024-09-16 15:44:08,251 - root - INFO - Requested to load AutoencodingEngine
2024-09-16 15:44:08,251 - root - INFO - Loading 1 new model
2024-09-16 15:44:09,429 - root - INFO - loaded completely 0.0 159.87335777282715 True
2024-09-16 15:44:11,572 - root - INFO - Requested to load FluxClipModel_
2024-09-16 15:44:11,572 - root - INFO - Loading 1 new model
2024-09-16 15:44:15,025 - root - INFO - loaded completely 0.0 9319.23095703125 True
2024-09-16 15:44:15,313 - root - INFO - Requested to load Flux
2024-09-16 15:44:15,313 - root - INFO - Loading 1 new model
2024-09-16 15:44:18,720 - root - INFO - loaded partially 15233.7859375 15233.537109375 0
2024-09-16 15:44:19,950 - root - ERROR - !!! Exception during processing !!! Allocation on device
2024-09-16 15:44:19,999 - root - ERROR - Traceback (most recent call last):
File "E:\ComfyUI\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "E:\ComfyUI\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 612, in sample
samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\samplers.py", line 716, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\samplers.py", line 695, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\samplers.py", line 600, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\k_diffusion\sampling.py", line 653, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\samplers.py", line 299, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\samplers.py", line 682, in __call__
return self.predict_noise(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\samplers.py", line 685, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\samplers.py", line 279, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\model_base.py", line 142, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\ldm\flux\model.py", line 159, in forward
out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\ldm\flux\model.py", line 118, in forward_orig
img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI\ComfyUI\comfy\ldm\flux\layers.py", line 166, in forward
torch.cat((txt_v, img_v), dim=2), pe=pe)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.cuda.OutOfMemoryError: Allocation on device
2024-09-16 15:44:20,001 - root - ERROR - Got an OOM, unloading all loaded models.
2024-09-16 15:44:24,265 - root - INFO - Prompt executed in 47.65 seconds
## Attached Workflow
Please make sure that the workflow does not contain any sensitive information such as API keys or passwords.
Workflow too large; please upload it manually from the local file system.
## Additional Context
(Please add any additional context or steps to reproduce the error here.)
### Other
The workflow is below.
Not a bug, just an out-of-memory error.
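For anyone hitting the same OOM during Flux sampling, a few launch-time mitigations that commonly help are sketched below. These flag and variable names are suggestions, not taken from this report; verify them against your ComfyUI build's `--help` output before relying on them.

```shell
REM Launch ComfyUI keeping more model weights in system RAM (Windows standalone build):
python ComfyUI\main.py --windows-standalone-build --lowvram

REM Optionally let PyTorch's caching allocator grow segments instead of failing outright.
REM Note: this setting applies to the native caching allocator and may have no
REM effect when cudaMallocAsync is in use, as in this report:
set PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
```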
Interesting... usually it just spills over to RAM. Can you tell me why it runs fine with the tile nodes removed, but not with them in the workflow? I only have a single model upload node...
Thank you.