Why do I always run out of VRAM after updating to the latest version?
### Custom Node Testing
- [ ] I have tried disabling custom nodes and the issue persists (see how to disable custom nodes if you need help)
### Your question
Total timesteps: tensor([999, 937, 833, 624], device='cuda:0')
Using timesteps: tensor([999, 937, 833, 624], device='cuda:0')
Using sigmas: tensor([1.0000, 0.9375, 0.8333, 0.6249, 0.0000])
WanAnimate: Ref masks torch.Size([4, 47, 100, 54]) padded to shape torch.Size([4, 60, 100, 54])
WanAnimate: BG images torch.Size([3, 186, 800, 432]) padded to shape torch.Size([3, 186, 800, 432])
Sampling 185 frames in 3 windows, at 432x800 with 4 steps
Frames 0-77:   0%|          | 0/4 [00:00<?, ?it/s]
Error during model prediction: CUDA error: out of memory
Search for `cudaErrorMemoryAllocation' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Error during sampling: CUDA error: out of memory
Search for `cudaErrorMemoryAllocation' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception in thread Thread-19 (prompt_worker):
Traceback (most recent call last):
File "F:\ComfyUI\ComfyUI\ComfyUI\execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\ComfyUI\execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\ComfyUI\execution.py", line 298, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "F:\ComfyUI\ComfyUI\ComfyUI\execution.py", line 286, in process_inputs
result = f(**inputs)
File "F:\ComfyUI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes_sampler.py", line 3071, in process
raise e
File "F:\ComfyUI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes_sampler.py", line 2860, in process
noise_pred, _, self.cache_state = predict_with_cfg(
~~~~~~~~~~~~~~~~^
latent_model_input, cfg[min(i, len(timesteps)-1)], positive, text_embeds["negative_prompt_embeds"],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
timestep, i, cache_state=self.cache_state, image_cond=image_cond_in, clip_fea=clip_fea, wananim_face_pixels=face_images_in,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
wananim_pose_latents=pose_input_slice, uni3c_data=uni3c_data_input,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "F:\ComfyUI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes_sampler.py", line 1563, in predict_with_cfg
raise e
File "F:\ComfyUI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes_sampler.py", line 1434, in predict_with_cfg
noise_pred_cond, noise_pred_ovi, cache_state_cond = transformer(
~~~~~~~~~~~^
context=positive_embeds,
^^^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
**base_params
^^^^^^^^^^^^^
)
^
File "F:\ComfyUI\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
File "F:\ComfyUI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\model.py", line 2944, in forward
block.to(self.offload_device, non_blocking=self.use_non_blocking)
~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1371, in to
return self._apply(convert)
~~~~~~~~~~~^^^^^^^^^
File "F:\ComfyUI\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 930, in _apply
module._apply(fn)
~~~~~~~~~~~~~^^^^
File "F:\ComfyUI\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 930, in _apply
module._apply(fn)
~~~~~~~~~~~~~^^^^
File "F:\ComfyUI\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 930, in _apply
module._apply(fn)
~~~~~~~~~~~~~^^^^
File "F:\ComfyUI\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 957, in _apply
param_applied = fn(param)
File "F:\ComfyUI\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1357, in convert
return t.to(
~~~~^
device,
^^^^^^^
dtype if t.is_floating_point() or t.is_complex() else None,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
non_blocking,
^^^^^^^^^^^^^
)
^
torch.AcceleratorError: CUDA error: out of memory
Search for `cudaErrorMemoryAllocation' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "threading.py", line 1043, in _bootstrap_inner
File "threading.py", line 994, in run
File "F:\ComfyUI\ComfyUI\ComfyUI\main.py", line 202, in prompt_worker
e.execute(item[2], prompt_id, extra_data, item[4])
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\ComfyUI\execution.py", line 664, in execute
asyncio.run(self.execute_async(prompt, prompt_id, extra_data, execute_outputs))
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "asyncio\runners.py", line 195, in run
File "asyncio\runners.py", line 118, in run
File "asyncio\base_events.py", line 725, in run_until_complete
File "F:\ComfyUI\ComfyUI\ComfyUI\execution.py", line 711, in execute_async
result, error, ex = await execute(self.server, dynamic_prompt, self.caches, node_id, extra_data, executed, prompt_id, execution_list, pending_subgraph_results, pending_async_nodes, ui_node_outputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\ComfyUI\custom_nodes\comfyui-dev-utils\nodes\execution_time.py", line 83, in dev_utils_execute
result = await origin_execute(server, dynprompt, caches, current_item, extra_data, executed, prompt_id,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
execution_list, pending_subgraph_results, pending_async_nodes, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\ComfyUI\execution.py", line 588, in execute
input_data_formatted[name] = [format_value(x) for x in inputs]
~~~~~~~~~~~~^^^
File "F:\ComfyUI\ComfyUI\ComfyUI\execution.py", line 402, in format_value
return str(x)
File "F:\ComfyUI\ComfyUI\python_embeded\Lib\site-packages\torch\_tensor.py", line 568, in __repr__
return torch._tensor_str._str(self, tensor_contents=tensor_contents)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\python_embeded\Lib\site-packages\torch\_tensor_str.py", line 722, in _str
return _str_intern(self, tensor_contents=tensor_contents)
File "F:\ComfyUI\ComfyUI\python_embeded\Lib\site-packages\torch\_tensor_str.py", line 643, in _str_intern
tensor_str = _tensor_str(self, indent)
File "F:\ComfyUI\ComfyUI\python_embeded\Lib\site-packages\torch\_tensor_str.py", line 375, in _tensor_str
formatter = _Formatter(get_summarized_data(self) if summarize else self)
~~~~~~~~~~~~~~~~~~~^^^^^^
File "F:\ComfyUI\ComfyUI\python_embeded\Lib\site-packages\torch\_tensor_str.py", line 413, in get_summarized_data
return torch.stack([get_summarized_data(x) for x in self])
~~~~~~~~~~~~~~~~~~~^^^
File "F:\ComfyUI\ComfyUI\python_embeded\Lib\site-packages\torch\_tensor_str.py", line 411, in get_summarized_data
return torch.stack([get_summarized_data(x) for x in (start + end)])
~~~~~~~~~~~~~~~~~~~^^^
File "F:\ComfyUI\ComfyUI\python_embeded\Lib\site-packages\torch\_tensor_str.py", line 401, in get_summarized_data
return torch.cat(
return torch.cat(
~~~~~~~~~^
(self[: PRINT_OPTS.edgeitems], self[-PRINT_OPTS.edgeitems :])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
torch.AcceleratorError: CUDA error: out of memory
Search for `cudaErrorMemoryAllocation' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
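The error text itself points at the first debugging steps. A minimal launch sketch with those knobs set (`--lowvram` is ComfyUI's built-in aggressive-offload flag; whether `expandable_segments` helps depends on the workload and PyTorch build, so treat this as a thing to try, not a guaranteed fix):

```shell
# Make CUDA errors synchronous so the traceback points at the real failing call.
export CUDA_LAUNCH_BLOCKING=1

# Ask PyTorch's caching allocator to use expandable segments, which can reduce
# fragmentation-induced OOMs on recent PyTorch versions.
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True

# Then relaunch ComfyUI with more aggressive offloading (illustrative path):
# python main.py --lowvram
```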
### Logs

### Other

No response
Are you sure this didn't happen with the previous version too? In any case, there is a serious performance issue with the latest version: the GUI has become incredibly sluggish regardless of hardware.
I cannot get Flux.2 to run at all; it always runs out of RAM. The blog post makes it seem like it should run on a 4090. Should it? Does it?
64GB RAM and a 4090 for me.
Thank you.
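A back-of-envelope way to reason about "should it fit" questions like this one: the weights alone need roughly parameter count × bytes per dtype, before activations, the text encoder, or the VAE are counted. The numbers below are illustrative placeholders, not actual Flux.2 figures:

```python
def weight_footprint_gib(n_params: float, bytes_per_param: float) -> float:
    """Approximate GiB needed just to hold the weights (no activations)."""
    return n_params * bytes_per_param / 2**30

# Hypothetical 12B-parameter model stored in bf16 (2 bytes per parameter):
print(round(weight_footprint_gib(12e9, 2), 2))  # -> 22.35
```

Quantized checkpoints (e.g. GGUF Q4) shrink this roughly in proportion to bits per weight, which is why they are the usual escape hatch on 16-24 GB cards.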
# ComfyUI Error Report
## Error Details
- Node ID: 13
- Node Type: SamplerCustomAdvanced
- Exception Type: torch.AcceleratorError
- Exception Message: CUDA error: out of memory
Search for `cudaErrorMemoryAllocation' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
## Stack Trace
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 298, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 286, in process_inputs
result = f(**inputs)
File "D:\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 835, in sample
samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 1035, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
File "D:\ComfyUI_windows_portable\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 990, in outer_sample
noise = noise.to(device)
## System Information
- ComfyUI Version: 0.3.73
- Arguments: main.py
- OS: nt
- Python Version: 3.13.9 (tags/v3.13.9:8183fa5, Oct 14 2025, 14:09:13) [MSC v.1944 64 bit (AMD64)]
- Embedded Python: true
- PyTorch Version: 2.9.1+cu130
## Devices
- Name: cuda:0 NVIDIA GeForce RTX 5060 Ti : cudaMallocAsync
- Type: cuda
- VRAM Total: 17102864384
- VRAM Free: 15802040320
- Torch VRAM Total: 0
- Torch VRAM Free: 0
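The Devices block reports VRAM in raw bytes, which is hard to eyeball; a trivial conversion (GiB = bytes / 2^30) makes the figures above readable:

```python
def bytes_to_gib(n: int) -> float:
    """Convert a raw byte count (as printed in the report) to GiB."""
    return round(n / 2**30, 2)

print(bytes_to_gib(17102864384))  # VRAM Total -> 15.93
print(bytes_to_gib(15802040320))  # VRAM Free  -> 14.72
```

"Torch VRAM Total: 0" typically just means the PyTorch caching allocator had not reserved anything yet when the report was generated.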
## Logs
2025-11-27T00:45:33.976448 - Adding extra search path checkpoints I:\models\checkpoints
2025-11-27T00:45:33.976666 - Adding extra search path loras I:\models\loras
2025-11-27T00:45:33.978330 - Adding extra search path vae I:\models\vae
2025-11-27T00:45:33.978855 - Adding extra search path configs I:\models\configs
2025-11-27T00:45:33.979361 - Adding extra search path unet I:\models\unet
2025-11-27T00:45:33.979874 - Adding extra search path clip I:\models\clip
2025-11-27T00:45:33.980458 - Adding extra search path clip_vision I:\models\clip_vision
2025-11-27T00:45:33.980917 - Adding extra search path controlnet I:\models\controlnet
2025-11-27T00:45:33.981417 - Adding extra search path gligen I:\models\gligen
2025-11-27T00:45:33.981939 - Adding extra search path style_models I:\models\style_models
2025-11-27T00:45:33.982444 - Adding extra search path diffusers I:\models\diffusers
2025-11-27T00:45:33.982958 - Adding extra search path embeddings I:\models\embeddings
2025-11-27T00:45:33.983460 - Adding extra search path hypernetworks I:\models\hypernetworks
2025-11-27T00:45:33.983985 - Adding extra search path upscale_models I:\models\upscale_models
2025-11-27T00:45:33.984487 - Adding extra search path vae_approx I:\models\vae_approx
2025-11-27T00:45:33.985001 - Adding extra search path ipadapter I:\models\ipadapter
2025-11-27T00:45:33.985502 - Adding extra search path insightface I:\models\insightface
2025-11-27T00:45:33.986015 - Adding extra search path facerestore_models I:\models\facerestore_models
2025-11-27T00:45:33.986517 - Adding extra search path facedetection I:\models\facedetection
2025-11-27T00:45:33.987041 - Adding extra search path face_parsing I:\models\face_parsing
2025-11-27T00:45:33.987543 - Adding extra search path pulid I:\models\pulid
2025-11-27T00:45:33.988066 - Adding extra search path florence2 I:\models\florence2
2025-11-27T00:45:33.988568 - Adding extra search path sams I:\models\sams
2025-11-27T00:45:33.989104 - Adding extra search path sam2 I:\models\sam2
2025-11-27T00:45:33.989608 - Adding extra search path ultralytics I:\models\ultralytics
2025-11-27T00:45:33.990122 - Adding extra search path yolo_world I:\models\yolo-world
2025-11-27T00:45:33.990630 - Adding extra search path grounding-dino I:\models\grounding-dino
2025-11-27T00:45:33.991136 - Adding extra search path photomaker I:\models\photomaker
2025-11-27T00:45:33.991640 - Adding extra search path animatediff_models I:\models\diffusion_models
2025-11-27T00:45:33.992151 - Adding extra search path animate_diff_motion_lora I:\models\loras
2025-11-27T00:45:33.992656 - Adding extra search path classifiers I:\models\classifiers
2025-11-27T00:45:33.995823 - Adding extra search path xlabs I:\models\xlabs
2025-11-27T00:45:34.634699 - [START] Security scan
2025-11-27T00:45:36.274741 - [DONE] Security scan
2025-11-27T00:45:36.442878 - ## ComfyUI-Manager: installing dependencies done.
2025-11-27T00:45:36.443341 - ** ComfyUI startup time: 2025-11-27 00:45:36.443
2025-11-27T00:45:36.444325 - ** Platform: Windows
2025-11-27T00:45:36.445189 - ** Python version: 3.13.9 (tags/v3.13.9:8183fa5, Oct 14 2025, 14:09:13) [MSC v.1944 64 bit (AMD64)]
2025-11-27T00:45:36.445984 - ** Python executable: D:\ComfyUI_windows_portable\python_embeded\python.exe
2025-11-27T00:45:36.446823 - ** ComfyUI Path: D:\ComfyUI_windows_portable\ComfyUI
2025-11-27T00:45:36.447629 - ** ComfyUI Base Folder Path: D:\ComfyUI_windows_portable\ComfyUI
2025-11-27T00:45:36.448466 - ** User directory: D:\ComfyUI_windows_portable\ComfyUI\user
2025-11-27T00:45:36.450722 - ** ComfyUI-Manager config path: D:\ComfyUI_windows_portable\ComfyUI\user\default\ComfyUI-Manager\config.ini
2025-11-27T00:45:36.451648 - ** Log path: D:\ComfyUI_windows_portable\ComfyUI\user\comfyui.log
2025-11-27T00:45:37.979115 -
Prestartup times for custom nodes:
2025-11-27T00:45:37.979383 - 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
2025-11-27T00:45:37.980902 - 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-easy-use
2025-11-27T00:45:37.982591 - 4.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager
2025-11-27T00:45:37.983103 -
2025-11-27T00:45:40.702631 - Checkpoint files will always be loaded safely.
2025-11-27T00:45:41.061788 - Total VRAM 16311 MB, total RAM 65419 MB
2025-11-27T00:45:41.062173 - pytorch version: 2.9.1+cu130
2025-11-27T00:45:41.064328 - Set vram state to: NORMAL_VRAM
2025-11-27T00:45:41.065326 - Device: cuda:0 NVIDIA GeForce RTX 5060 Ti : cudaMallocAsync
2025-11-27T00:45:41.088263 - Enabled pinned memory 29438.0
2025-11-27T00:45:41.122430 - working around nvidia conv3d memory bug.
2025-11-27T00:45:42.876339 - Using pytorch attention
2025-11-27T00:45:46.022464 - Python version: 3.13.9 (tags/v3.13.9:8183fa5, Oct 14 2025, 14:09:13) [MSC v.1944 64 bit (AMD64)]
2025-11-27T00:45:46.022749 - ComfyUI version: 0.3.73
2025-11-27T00:45:46.071037 - ComfyUI frontend version: 1.30.6
2025-11-27T00:45:46.072984 - [Prompt Server] web root: D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\comfyui_frontend_package\static
2025-11-27T00:45:47.068044 - Total VRAM 16311 MB, total RAM 65419 MB
2025-11-27T00:45:47.068344 - pytorch version: 2.9.1+cu130
2025-11-27T00:45:47.071166 - Set vram state to: NORMAL_VRAM
2025-11-27T00:45:47.071618 - Device: cuda:0 NVIDIA GeForce RTX 5060 Ti : cudaMallocAsync
2025-11-27T00:45:47.101731 - Enabled pinned memory 29438.0
2025-11-27T00:45:51.283695 - [ComfyUI-Easy-Use] server: v1.3.4 Loaded
2025-11-27T00:45:51.284265 - [ComfyUI-Easy-Use] web root: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-easy-use\web_version/v2 Loaded
2025-11-27T00:45:51.301662 - ComfyUI-GGUF: Allowing full torch compile
2025-11-27T00:45:51.341791 - ### Loading: ComfyUI-Manager (V3.37.1)
2025-11-27T00:45:51.342759 - [ComfyUI-Manager] network_mode: public
2025-11-27T00:45:51.470744 - ### ComfyUI Revision: 150 [0c18842a] *DETACHED | Released on '2025-11-25'
2025-11-27T00:45:51.537528 - [MultiGPU Core Patching] Patching mm.soft_empty_cache for Comprehensive Memory Management (VRAM + CPU + Store Pruning)
2025-11-27T00:45:51.557425 - [MultiGPU Core Patching] Patching mm.get_torch_device, mm.text_encoder_device, mm.unet_offload_device
2025-11-27T00:45:51.558422 - [MultiGPU DEBUG] Initial current_device: cuda:0
2025-11-27T00:45:51.559671 - [MultiGPU DEBUG] Initial current_text_encoder_device: cuda:0
2025-11-27T00:45:51.560967 - [MultiGPU DEBUG] Initial current_unet_offload_device: cpu
2025-11-27T00:45:51.582129 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-11-27T00:45:51.652003 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-11-27T00:45:51.698933 - [MultiGPU] Initiating custom_node Registration. . .
2025-11-27T00:45:51.704355 - -----------------------------------------------
2025-11-27T00:45:51.709538 - custom_node Found Nodes
2025-11-27T00:45:51.714989 - -----------------------------------------------
2025-11-27T00:45:51.725221 - ComfyUI-LTXVideo N 0
2025-11-27T00:45:51.730520 - ComfyUI-Florence2 N 0
2025-11-27T00:45:51.735743 - ComfyUI_bitsandbytes_NF4 N 0
2025-11-27T00:45:51.740942 - x-flux-comfyui N 0
2025-11-27T00:45:51.746290 - ComfyUI-MMAudio N 0
2025-11-27T00:45:51.751510 - ComfyUI-GGUF Y 18
2025-11-27T00:45:51.756616 - PuLID_ComfyUI N 0
2025-11-27T00:45:51.761875 - ComfyUI-WanVideoWrapper N 0
2025-11-27T00:45:51.767059 - -----------------------------------------------
2025-11-27T00:45:51.767614 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-11-27T00:45:51.777313 - [MultiGPU] Registration complete. Final mappings: CheckpointLoaderAdvancedMultiGPU, CheckpointLoaderAdvancedDisTorch2MultiGPU, UNetLoaderLP, UNETLoaderMultiGPU, VAELoaderMultiGPU, CLIPLoaderMultiGPU, DualCLIPLoaderMultiGPU, TripleCLIPLoaderMultiGPU, QuadrupleCLIPLoaderMultiGPU, CLIPVisionLoaderMultiGPU, CheckpointLoaderSimpleMultiGPU, ControlNetLoaderMultiGPU, DiffusersLoaderMultiGPU, DiffControlNetLoaderMultiGPU, UNETLoaderDisTorch2MultiGPU, VAELoaderDisTorch2MultiGPU, CLIPLoaderDisTorch2MultiGPU, DualCLIPLoaderDisTorch2MultiGPU, TripleCLIPLoaderDisTorch2MultiGPU, QuadrupleCLIPLoaderDisTorch2MultiGPU, CLIPVisionLoaderDisTorch2MultiGPU, CheckpointLoaderSimpleDisTorch2MultiGPU, ControlNetLoaderDisTorch2MultiGPU, DiffusersLoaderDisTorch2MultiGPU, DiffControlNetLoaderDisTorch2MultiGPU, UnetLoaderGGUFDisTorchMultiGPU, UnetLoaderGGUFAdvancedDisTorchMultiGPU, CLIPLoaderGGUFDisTorchMultiGPU, DualCLIPLoaderGGUFDisTorchMultiGPU, TripleCLIPLoaderGGUFDisTorchMultiGPU, QuadrupleCLIPLoaderGGUFDisTorchMultiGPU, UnetLoaderGGUFDisTorch2MultiGPU, UnetLoaderGGUFAdvancedDisTorch2MultiGPU, CLIPLoaderGGUFDisTorch2MultiGPU, DualCLIPLoaderGGUFDisTorch2MultiGPU, TripleCLIPLoaderGGUFDisTorch2MultiGPU, QuadrupleCLIPLoaderGGUFDisTorch2MultiGPU, UnetLoaderGGUFMultiGPU, UnetLoaderGGUFAdvancedMultiGPU, CLIPLoaderGGUFMultiGPU, DualCLIPLoaderGGUFMultiGPU, TripleCLIPLoaderGGUFMultiGPU, QuadrupleCLIPLoaderGGUFMultiGPU
2025-11-27T00:45:51.859082 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-11-27T00:45:51.865130 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-11-27T00:45:51.910255 -
2025-11-27T00:45:51.910498 - [rgthree-comfy] Loaded 48 epic nodes. 🎉
2025-11-27T00:45:52.524271 -
2025-11-27T00:45:52.530326 -
Import times for custom nodes:
2025-11-27T00:45:52.530523 - 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
2025-11-27T00:45:52.533212 - 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF
2025-11-27T00:45:52.533741 - 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Hybrid-Scaled_fp8-Loader
2025-11-27T00:45:52.534868 - 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-kjnodes
2025-11-27T00:45:52.535378 - 0.1 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\gguf
2025-11-27T00:45:52.535884 - 0.2 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager
2025-11-27T00:45:52.536393 - 0.3 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-multigpu
2025-11-27T00:45:52.537209 - 0.7 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
2025-11-27T00:45:52.538744 - 3.3 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-easy-use
2025-11-27T00:45:52.539798 -
2025-11-27T00:45:53.035119 - Context impl SQLiteImpl.
2025-11-27T00:45:53.035411 - Will assume non-transactional DDL.
2025-11-27T00:45:53.038029 - No target revision found.
2025-11-27T00:45:53.128720 - Starting server
2025-11-27T00:45:53.129683 - To see the GUI go to: http://127.0.0.1:8188
2025-11-27T00:45:55.892625 - FETCH ComfyRegistry Data: 5/108
2025-11-27T00:45:59.354437 - FETCH ComfyRegistry Data: 10/108
2025-11-27T00:46:00.759112 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-11-27T00:46:00.760036 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /extensions/core/groupNode.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-11-27T00:46:01.535247 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui/components/buttonGroup.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-11-27T00:46:01.541565 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui/components/button.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-11-27T00:47:11.012637 - FETCH ComfyRegistry Data [DONE]
2025-11-27T00:47:11.293599 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
2025-11-27T00:47:11.327447 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [DONE]
2025-11-27T00:47:11.544175 - [ComfyUI-Manager] All startup tasks have been completed.
2025-11-27T00:49:02.476075 - got prompt
2025-11-27T00:49:02.477807 - Failed to validate prompt for output 9:
2025-11-27T00:49:02.478072 - * (prompt):
2025-11-27T00:49:02.478674 - - Required input is missing: images
2025-11-27T00:49:02.479431 - * SaveImage 9:
2025-11-27T00:49:02.480188 - - Required input is missing: images
2025-11-27T00:49:02.480725 - Output will be ignored
2025-11-27T00:49:02.481299 - Failed to validate prompt for output 71:
2025-11-27T00:49:02.481884 - * (prompt):
2025-11-27T00:49:02.482433 - - Required input is missing: images
2025-11-27T00:49:02.483155 - * SaveImage 71:
2025-11-27T00:49:02.483700 - - Required input is missing: images
2025-11-27T00:49:02.484667 - Output will be ignored
2025-11-27T00:49:02.536509 - Using pytorch attention in VAE
2025-11-27T00:49:02.538841 - Using pytorch attention in VAE
2025-11-27T00:49:02.815753 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-11-27T00:49:03.863272 - Requested to load AutoencoderKL
2025-11-27T00:49:04.012132 - loaded completely; 10836.00 MB usable, 160.31 MB loaded, full load: True
2025-11-27T00:49:04.853244 - !!! Exception during processing !!! Unexpected text model architecture type in GGUF file: 'cow'
2025-11-27T00:49:04.858948 - Traceback (most recent call last):
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 298, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 286, in process_inputs
result = f(**inputs)
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-multigpu\wrappers.py", line 538, in override
out = fn(*args, **kwargs)
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-multigpu\nodes.py", line 81, in load_clip
return original_loader.load_clip(clip_name, type)
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py", line 244, in load_clip
return (self.load_patcher([clip_path], clip_type, self.load_data([clip_path])),)
~~~~~~~~~~~~~~^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py", line 220, in load_data
sd = gguf_clip_loader(p)
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\loader.py", line 331, in gguf_clip_loader
sd, arch = gguf_sd_loader(path, return_arch=True, is_text_model=True)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\loader.py", line 89, in gguf_sd_loader
raise ValueError(f"Unexpected text model architecture type in GGUF file: {arch_str!r}")
ValueError: Unexpected text model architecture type in GGUF file: 'cow'
2025-11-27T00:49:04.864190 - Prompt executed in 2.37 seconds
2025-11-27T00:52:18.956410 - got prompt
2025-11-27T00:52:18.957896 - Failed to validate prompt for output 9:
2025-11-27T00:52:18.960295 - * (prompt):
2025-11-27T00:52:18.961026 - - Required input is missing: images
2025-11-27T00:52:18.961595 - * SaveImage 9:
2025-11-27T00:52:18.962478 - - Required input is missing: images
2025-11-27T00:52:18.963058 - Output will be ignored
2025-11-27T00:52:18.963634 - Failed to validate prompt for output 71:
2025-11-27T00:52:18.964190 - * (prompt):
2025-11-27T00:52:18.964731 - - Required input is missing: images
2025-11-27T00:52:18.965351 - * SaveImage 71:
2025-11-27T00:52:18.965909 - - Required input is missing: images
2025-11-27T00:52:18.966474 - Output will be ignored
2025-11-27T00:52:19.035139 - [MultiGPU Core Patching] text_encoder_device_patched returning device: cuda:1 (current_text_encoder_device=cuda:1)
2025-11-27T00:52:19.266817 - Using MixedPrecisionOps for text encoder: 210 quantized layers
2025-11-27T00:52:25.602852 - CLIP/text encoder model load device: cuda:1, offload device: cpu, current: cpu, dtype: torch.float16
2025-11-27T00:52:28.968739 - Requested to load Flux2TEModel_
2025-11-27T00:52:43.519829 - loaded partially; 5646.80 MB usable, 5645.59 MB loaded, 11535.00 MB offloaded, lowvram patches: 0
2025-11-27T00:52:48.140108 - Using scaled fp8: fp8 matrix mult: False, scale input: False
2025-11-27T00:52:48.231677 - model weight dtype torch.bfloat16, manual cast: None
2025-11-27T00:52:48.232619 - model_type FLUX
2025-11-27T00:54:35.382754 - Requested to load Flux2
2025-11-27T00:54:35.652105 - 0 models unloaded.
2025-11-27T00:54:38.744951 - loaded partially; 128.00 MB usable, 112.53 MB loaded, 30618.00 MB offloaded, lowvram patches: 0
2025-11-27T00:54:38.900717 - !!! Exception during processing !!! CUDA error: out of memory
Search for `cudaErrorMemoryAllocation' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-11-27T00:54:38.908679 - Traceback (most recent call last):
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 298, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 286, in process_inputs
result = f(**inputs)
File "D:\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 835, in sample
samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 1035, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
File "D:\ComfyUI_windows_portable\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 990, in outer_sample
noise = noise.to(device)
torch.AcceleratorError: CUDA error: out of memory
Search for `cudaErrorMemoryAllocation' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
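The "loaded partially; 128.00 MB usable" line just above means essentially no VRAM was free when the diffusion model tried to load, so the `noise.to(device)` OOM that follows is expected. As a rough back-of-envelope check of whether the weights alone can fit, you can compare parameter count times bytes-per-parameter against free VRAM. This ignores activations, the text encoder, and ComfyUI's own reserve, so treat it as a lower bound; the parameter count below is a hypothetical example, not a confirmed Flux.2 figure:

```python
def weights_mb(n_params: float, bytes_per_param: int) -> float:
    """Approximate VRAM needed for model weights alone, in MiB."""
    return n_params * bytes_per_param / 2**20

# Hypothetical example: a 16B-parameter model at bf16 (2 bytes/param)
# already needs ~30 GiB for weights alone, which is in the same ballpark
# as the "30618.00 MB offloaded" line in the log above and more than a
# 24 GB card can hold once anything else is resident.
print(round(weights_mb(16e9, 2)))
```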
2025-11-27T00:54:38.921237 - Prompt executed in 139.95 seconds
2025-11-27T00:54:40.357129 - Exception in thread Thread-9 (prompt_worker):
Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
Workflow too large. Please manually upload the workflow from local file system.
Additional Context
(Please add any additional context or steps to reproduce the error here)
I cannot get Flux.2 to run at all. It always runs out of memory. The blog post makes it sound like it should run on a 4090. Should it? Does it?
I have 64 GB of RAM and a 4090.
Thank you.
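One thing worth trying on a 24 GB card is forcing more aggressive offloading via ComfyUI's launch flags. Flag availability varies between versions, so confirm them with `python main.py --help` on your install; on the Windows portable build the invocation would look roughly like this (a sketch, not a guaranteed fix):

```
.\python_embeded\python.exe -s ComfyUI\main.py --lowvram
.\python_embeded\python.exe -s ComfyUI\main.py --lowvram --disable-smart-memory
```

`--lowvram` trades speed for smaller resident model chunks, and `--disable-smart-memory` makes ComfyUI unload models to system RAM more eagerly between nodes.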
Got it running... not very stable, but I can at least get 5 gens before it crashes.
I had to update to the very latest NVIDIA GeForce drivers.
Same here.
Since the updates yesterday (02.12.2025), where before I could comfortably create 1280x1280 i2v videos on my 4090, at first I got only a green square trying to rotate on the frontend when loading. After a few hours of debugging I did a completely fresh install, and it loaded again and handled the simplest rendering jobs, but only without any custom nodes.
Then I started an also very simple Wan 2.2 i2v painter workflow to recreate a 1280x1280 video, which was never a problem until yesterday. Now it eats up my VRAM (24 GB) completely and crashes. 1024x1024 does still work.
The partial incompatibility with a lot of existing custom nodes, even ones that do not use any frontend code, is a big mess. I understand the reasons for technical design changes, and I am sure the end result will be great, but IMHO they should be tested better before being rolled out to everyone, and backwards compatibility, at least of the backend code, should be kept, especially when even the fallback to the older nodes creates such big problems with fairly simple workflows.
I get horror visions when I think about all the work I now have to do to get all of my workflows working again, even with just the v1 nodes.
Btw.: I'm on ComfyUI version: 0.3.76, ComfyUI frontend version: 1.33.10 now
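The 1280x1280-fails / 1024x1024-works behaviour is at least consistent with plain resolution scaling: per-frame activation memory grows at least linearly with pixel count, and attention layers can grow faster, so this sketch is only a lower bound on the extra memory needed:

```python
def pixel_ratio(w1: int, h1: int, w2: int, h2: int) -> float:
    """Lower-bound memory scaling factor between two resolutions,
    assuming activation memory grows linearly with pixel count."""
    return (w1 * h1) / (w2 * h2)

# 1280x1280 needs at least ~1.56x the activation memory of 1024x1024,
# which can be enough to tip a 24 GB card from "barely fits" to OOM.
print(pixel_ratio(1280, 1280, 1024, 1024))
```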
I'm on a 5090 and getting KSamplerAdvanced: CUDA error: out of memory.
I'm having the same problem! I thought it was just me, but it turns out everyone has this issue. It looks like a compatibility bug?