KSamplerAdvanced fails with "CUDA error: out of memory" on an RTX 5090
Custom Node Testing
- [x] I have tried disabling custom nodes and the issue persists (see how to disable custom nodes if you need help)
Expected Behavior
The workflow runs to completion.
Actual Behavior
ComfyUI Error Report
Error Details
- Node ID: 84
- Node Type: KSamplerAdvanced
- Exception Type: torch.AcceleratorError
- Exception Message: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
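The debugging advice in the exception message can be applied before launching ComfyUI. A minimal sketch (my addition, not from the report; the variable must be set before PyTorch initializes CUDA):

```python
# Force synchronous CUDA kernel launches so the reported stack trace points
# at the call that actually failed, rather than a later API call.
import os

os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# Note: TORCH_USE_CUDA_DSA is a *build-time* flag; exporting it at runtime has
# no effect on a prebuilt PyTorch wheel such as 2.8.0+cu129 used here.
assert os.environ["CUDA_LAUNCH_BLOCKING"] == "1"
```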
Stack Trace
File "C:\ComfyUI\resources\ComfyUI\execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\execution.py", line 298, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "C:\ComfyUI\resources\ComfyUI\execution.py", line 286, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\nodes.py", line 1569, in sample
return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\nodes.py", line 1502, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\sample.py", line 60, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1163, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1053, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1035, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 984, in outer_sample
self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds, self.model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\sampler_helpers.py", line 130, in prepare_sampling
return executor.execute(model, noise_shape, conds, model_options=model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\sampler_helpers.py", line 138, in _prepare_sampling
comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required + inference_memory, minimum_memory_required=minimum_memory_required + inference_memory)
File "C:\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 671, in load_models_gpu
free_memory(total_memory_required[device] * 1.1 + extra_mem, device)
File "C:\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 603, in free_memory
if current_loaded_models[i].model_unload(memory_to_free):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 526, in model_unload
freed = self.model.partially_unload(self.model.offload_device, memory_to_free)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 913, in partially_unload
m.to(device_to)
File "C:\ComfyUIx2\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1369, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUIx2\.venv\Lib\site-packages\torch\nn\modules\module.py", line 955, in _apply
param_applied = fn(param)
^^^^^^^^^
File "C:\ComfyUIx2\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1355, in convert
return t.to(
^^^^^
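Notably, the trace bottoms out in `Module.to()` inside `partially_unload`: the OOM is raised while ComfyUI is moving weights *off* the GPU to the offload device, not while loading them. A hedged sketch of one mitigation pattern around such a move (a hypothetical helper, not ComfyUI's actual code):

```python
import torch

def safe_offload(module: torch.nn.Module, device: str = "cpu") -> torch.nn.Module:
    """Move a module to the offload device, retrying once after a cache flush.

    Hypothetical helper for illustration only: shows one way to survive an OOM
    raised inside Module.to() during weight offloading.
    """
    try:
        return module.to(device)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()  # return cached allocator blocks to the driver
        return module.to(device)  # retry the move once
```

On the happy path this behaves exactly like `module.to(device)`; the retry only fires when the allocator reports an out-of-memory condition during the transfer.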
System Information
- ComfyUI Version: 0.3.76
- Arguments: C:\ComfyUI\resources\ComfyUI\main.py --user-directory C:\ComfyUIx2\user --input-directory C:\ComfyUIx2\input --output-directory C:\ComfyUIx2\output --front-end-root C:\ComfyUI\resources\ComfyUI\web_custom_versions\desktop_app --base-directory C:\ComfyUIx2 --extra-model-paths-config C:\Users\wapwr\AppData\Roaming\ComfyUI\extra_models_config.yaml --log-stdout --listen 127.0.0.1 --port 8000
- OS: nt
- Python Version: 3.12.9 (main, Feb 12 2025, 14:52:31) [MSC v.1942 64 bit (AMD64)]
- Embedded Python: false
- PyTorch Version: 2.8.0+cu129
Devices
- Name: cuda:0 NVIDIA GeForce RTX 5090 : cudaMallocAsync
- Type: cuda
- VRAM Total: 34190458880
- VRAM Free: 32429309952
- Torch VRAM Total: 0
- Torch VRAM Free: 0
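For reference, the raw VRAM figures above are in bytes; converting them to MiB reproduces the "Total VRAM 32607 MB" line in the startup log (values copied from this report):

```python
# VRAM figures from the Devices section above, in bytes.
vram_total_bytes = 34_190_458_880  # "VRAM Total"
vram_free_bytes = 32_429_309_952   # "VRAM Free"

MIB = 1024 ** 2
print(round(vram_total_bytes / MIB))  # 32607, matching the startup log
print(round(vram_free_bytes / MIB))   # 30927 MiB free at report time
```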
Logs
2025-12-08T20:52:51.019405 - Adding extra search path custom_nodes C:\ComfyUIx2\custom_nodes
2025-12-08T20:52:51.019405 - Adding extra search path download_model_base C:\ComfyUIx2\models
2025-12-08T20:52:51.019405 - Adding extra search path custom_nodes C:\ComfyUI\resources\ComfyUI\custom_nodes
2025-12-08T20:52:51.019405 - Setting output directory to: C:\ComfyUIx2\output
2025-12-08T20:52:51.019405 - Setting input directory to: C:\ComfyUIx2\input
2025-12-08T20:52:51.019405 - Setting user directory to: C:\ComfyUIx2\user
2025-12-08T20:52:51.515374 - [START] Security scan
2025-12-08T20:52:52.139259 - [DONE] Security scan
2025-12-08T20:52:52.240020 - ## ComfyUI-Manager: installing dependencies done.
2025-12-08T20:52:52.240020 - ** ComfyUI startup time: 2025-12-08 20:52:52.240
2025-12-08T20:52:52.240020 - ** Platform: Windows
2025-12-08T20:52:52.240020 - ** Python version: 3.12.9 (main, Feb 12 2025, 14:52:31) [MSC v.1942 64 bit (AMD64)]
2025-12-08T20:52:52.240020 - ** Python executable: C:\ComfyUIx2\.venv\Scripts\python.exe
2025-12-08T20:52:52.240020 - ** ComfyUI Path: C:\ComfyUI\resources\ComfyUI
2025-12-08T20:52:52.240020 - ** ComfyUI Base Folder Path: C:\ComfyUI\resources\ComfyUI
2025-12-08T20:52:52.240020 - ** User directory: C:\ComfyUIx2\user
2025-12-08T20:52:52.240020 - ** ComfyUI-Manager config path: C:\ComfyUIx2\user\default\ComfyUI-Manager\config.ini
2025-12-08T20:52:52.240020 - ** Log path: C:\ComfyUIx2\user\comfyui.log
2025-12-08T20:52:52.851348 - [ComfyUI-Manager] Skipped fixing the 'comfyui-frontend-package' dependency because the ComfyUI is outdated.
2025-12-08T20:52:52.851348 -
Prestartup times for custom nodes:
2025-12-08T20:52:52.851348 - 1.8 seconds: C:\ComfyUI\resources\ComfyUI\custom_nodes\ComfyUI-Manager
2025-12-08T20:52:52.851348 -
2025-12-08T20:52:54.038478 - Checkpoint files will always be loaded safely.
2025-12-08T20:52:54.126976 - Total VRAM 32607 MB, total RAM 64792 MB
2025-12-08T20:52:54.126976 - pytorch version: 2.8.0+cu129
2025-12-08T20:52:54.126976 - Set vram state to: NORMAL_VRAM
2025-12-08T20:52:54.126976 - Device: cuda:0 NVIDIA GeForce RTX 5090 : cudaMallocAsync
2025-12-08T20:52:54.138251 - Using async weight offloading with 2 streams
2025-12-08T20:52:54.138251 - Enabled pinned memory 29156.0
2025-12-08T20:52:55.081017 - Using pytorch attention
2025-12-08T20:52:55.284938 - C:\ComfyUIx2\.venv\Lib\site-packages\transformers\utils\hub.py:110: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
warnings.warn(
2025-12-08T20:52:56.682894 - Python version: 3.12.9 (main, Feb 12 2025, 14:52:31) [MSC v.1942 64 bit (AMD64)]
2025-12-08T20:52:56.682894 - ComfyUI version: 0.3.76
2025-12-08T20:52:56.707407 - [Prompt Server] web root: C:\ComfyUI\resources\ComfyUI\web_custom_versions\desktop_app
2025-12-08T20:52:57.211304 - Total VRAM 32607 MB, total RAM 64792 MB
2025-12-08T20:52:57.211304 - pytorch version: 2.8.0+cu129
2025-12-08T20:52:57.211304 - Set vram state to: NORMAL_VRAM
2025-12-08T20:52:57.211304 - Device: cuda:0 NVIDIA GeForce RTX 5090 : cudaMallocAsync
2025-12-08T20:52:57.232506 - Using async weight offloading with 2 streams
2025-12-08T20:52:57.234020 - Enabled pinned memory 29156.0
2025-12-08T20:52:57.538433 - ### Loading: ComfyUI-Manager (V3.36)
2025-12-08T20:52:57.538433 - [ComfyUI-Manager] network_mode: public
2025-12-08T20:52:57.538433 - ### ComfyUI Revision: UNKNOWN (The currently installed ComfyUI is not a Git repository)
2025-12-08T20:52:57.545971 -
Import times for custom nodes:
2025-12-08T20:52:57.545971 - 0.0 seconds: C:\ComfyUI\resources\ComfyUI\custom_nodes\websocket_image_save.py
2025-12-08T20:52:57.545971 - 0.0 seconds: C:\ComfyUI\resources\ComfyUI\custom_nodes\ComfyUI-Manager
2025-12-08T20:52:57.545971 -
2025-12-08T20:52:57.628122 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-12-08T20:52:57.635753 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-12-08T20:52:57.664928 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-12-08T20:52:57.695492 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-12-08T20:52:57.846434 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-12-08T20:52:57.958670 - Failed to initialize database. Please ensure you have installed the latest requirements. If the error persists, please report this as in future the database will be required: (sqlite3.OperationalError) unable to open database file
(Background on this error at: https://sqlalche.me/e/20/e3q8)
2025-12-08T20:52:57.981638 - Starting server
2025-12-08T20:52:57.981638 - To see the GUI go to: http://127.0.0.1:8000
2025-12-08T20:52:59.363306 - comfyui-frontend-package not found in requirements.txt
2025-12-08T20:52:59.457330 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-12-08T20:52:59.460354 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /extensions/core/groupNode.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-12-08T20:52:59.550009 - comfyui-frontend-package not found in requirements.txt
2025-12-08T20:52:59.638743 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui/components/buttonGroup.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-12-08T20:52:59.640343 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui/components/button.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-12-08T20:53:02.137466 - FETCH ComfyRegistry Data: 5/110
2025-12-08T20:53:06.415744 - FETCH ComfyRegistry Data: 10/110
2025-12-08T20:53:11.425242 - FETCH ComfyRegistry Data: 15/110
2025-12-08T20:53:15.704541 - FETCH ComfyRegistry Data: 20/110
2025-12-08T20:53:20.712386 - FETCH ComfyRegistry Data: 25/110
2025-12-08T20:53:24.952698 - FETCH ComfyRegistry Data: 30/110
2025-12-08T20:53:29.489074 - FETCH ComfyRegistry Data: 35/110
2025-12-08T20:53:33.757124 - FETCH ComfyRegistry Data: 40/110
2025-12-08T20:53:38.046624 - FETCH ComfyRegistry Data: 45/110
2025-12-08T20:53:42.370018 - FETCH ComfyRegistry Data: 50/110
2025-12-08T20:53:46.621691 - FETCH ComfyRegistry Data: 55/110
2025-12-08T20:53:50.937854 - FETCH ComfyRegistry Data: 60/110
2025-12-08T20:53:55.253707 - FETCH ComfyRegistry Data: 65/110
2025-12-08T20:54:00.043422 - FETCH ComfyRegistry Data: 70/110
2025-12-08T20:54:04.340593 - FETCH ComfyRegistry Data: 75/110
2025-12-08T20:54:08.626063 - FETCH ComfyRegistry Data: 80/110
2025-12-08T20:54:12.935997 - FETCH ComfyRegistry Data: 85/110
2025-12-08T20:54:17.257865 - FETCH ComfyRegistry Data: 90/110
2025-12-08T20:54:21.606178 - FETCH ComfyRegistry Data: 95/110
2025-12-08T20:54:26.615541 - FETCH ComfyRegistry Data: 100/110
2025-12-08T20:54:30.933644 - FETCH ComfyRegistry Data: 105/110
2025-12-08T20:54:35.371812 - FETCH ComfyRegistry Data: 110/110
2025-12-08T20:54:35.873046 - FETCH ComfyRegistry Data [DONE]
2025-12-08T20:54:36.012151 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
2025-12-08T20:54:36.025977 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [DONE]
2025-12-08T20:54:36.180662 - [ComfyUI-Manager] broken item:{'author': 'rjgoif', 'title': 'Img Label Tools', 'id': 'Img-Label-Tools', 'reference': 'https://github.com/rjgoif/ComfyUI-Img-Label-Tools', 'install_type': 'git-clone', 'description': 'Tools to help annotate images for sharing on Reddit, Discord, etc.'}
2025-12-08T20:54:36.197530 - [ComfyUI-Manager] All startup tasks have been completed.
2025-12-08T20:56:55.742245 - got prompt
2025-12-08T20:56:55.774352 - Using pytorch attention in VAE
2025-12-08T20:56:55.774352 - Using pytorch attention in VAE
2025-12-08T20:56:55.962938 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-12-08T20:56:56.106020 - Using scaled fp8: fp8 matrix mult: False, scale input: False
2025-12-08T20:56:56.522519 - Requested to load WanTEModel
2025-12-08T20:56:56.527102 - loaded completely; 95367431640625005117571072.00 MB usable, 6419.48 MB loaded, full load: True
2025-12-08T20:56:56.528626 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16
2025-12-08T20:57:11.515113 - Requested to load WanVAE
2025-12-08T20:57:11.560672 - loaded completely; 22620.71 MB usable, 242.03 MB loaded, full load: True
2025-12-08T20:57:12.300382 - Using scaled fp8: fp8 matrix mult: True, scale input: True
2025-12-08T20:57:12.330562 - model weight dtype torch.float16, manual cast: None
2025-12-08T20:57:12.330562 - model_type FLOW
2025-12-08T20:57:18.144820 - Requested to load WAN21
2025-12-08T20:57:21.906841 - loaded completely; 22037.77 MB usable, 13629.08 MB loaded, full load: True
2025-12-08T20:57:42.012192 - 100%|██████████| 10/10 [00:20<00:00, 2.01s/it]
2025-12-08T20:57:42.479600 - Using scaled fp8: fp8 matrix mult: True, scale input: True
2025-12-08T20:57:42.522285 - model weight dtype torch.float16, manual cast: None
2025-12-08T20:57:42.522285 - model_type FLOW
2025-12-08T20:57:47.690501 - Requested to load WAN21
2025-12-08T20:57:50.008726 - Unloaded partially: 1402.02 MB freed, 12229.43 MB remains loaded, 75.01 MB buffer reserved, lowvram patches: 0
2025-12-08T20:57:53.796691 - loaded completely; 16469.85 MB usable, 13629.08 MB loaded, full load: True
2025-12-08T20:58:14.009170 - 100%|██████████| 10/10 [00:20<00:00, 2.02s/it]
2025-12-08T20:58:14.184338 - Requested to load WanVAE
2025-12-08T20:58:14.258122 - loaded completely; 2137.87 MB usable, 242.03 MB loaded, full load: True
2025-12-08T20:58:15.424459 - Prompt executed in 79.68 seconds
2025-12-08T20:58:15.603551 - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
Traceback (most recent call last):
File "C:\Users\wapwr\AppData\Roaming\uv\python\cpython-3.12.9-windows-x86_64-none\Lib\asyncio\events.py", line 88, in _run
self._context.run(self._callback, *self._args)
File "C:\Users\wapwr\AppData\Roaming\uv\python\cpython-3.12.9-windows-x86_64-none\Lib\asyncio\proactor_events.py", line 165, in _call_connection_lost
self._sock.shutdown(socket.SHUT_RDWR)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
2025-12-08T20:59:11.415751 - got prompt
2025-12-08T20:59:13.881560 - !!! Exception during processing !!! CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-12-08T20:59:13.884583 - Traceback (most recent call last):
File "C:\ComfyUI\resources\ComfyUI\execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\execution.py", line 298, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "C:\ComfyUI\resources\ComfyUI\execution.py", line 286, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\nodes.py", line 1569, in sample
return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\nodes.py", line 1502, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\sample.py", line 60, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1163, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1053, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1035, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 984, in outer_sample
self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds, self.model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\sampler_helpers.py", line 130, in prepare_sampling
return executor.execute(model, noise_shape, conds, model_options=model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\sampler_helpers.py", line 138, in _prepare_sampling
comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required + inference_memory, minimum_memory_required=minimum_memory_required + inference_memory)
File "C:\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 671, in load_models_gpu
free_memory(total_memory_required[device] * 1.1 + extra_mem, device)
File "C:\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 603, in free_memory
if current_loaded_models[i].model_unload(memory_to_free):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 526, in model_unload
freed = self.model.partially_unload(self.model.offload_device, memory_to_free)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 913, in partially_unload
m.to(device_to)
File "C:\ComfyUIx2\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1369, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUIx2\.venv\Lib\site-packages\torch\nn\modules\module.py", line 955, in _apply
param_applied = fn(param)
^^^^^^^^^
File "C:\ComfyUIx2\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1355, in convert
return t.to(
^^^^^
torch.AcceleratorError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-12-08T20:59:13.886133 - Prompt executed in 2.47 seconds
Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
Workflow too large. Please manually upload the workflow from local file system.
Additional Context
Steps to Reproduce
Just run the workflow: the first prompt completes successfully (79.68 seconds), and the second prompt immediately fails with the OOM error.
Debug Logs
(The debug log repeats the error report shown above: same node (ID 84, KSamplerAdvanced), same torch.AcceleratorError "CUDA error: out of memory", same stack trace, and the same system information and launch arguments.)
- **OS:** nt
- **Python Version:** 3.12.9 (main, Feb 12 2025, 14:52:31) [MSC v.1942 64 bit (AMD64)]
- **Embedded Python:** false
- **PyTorch Version:** 2.8.0+cu129
## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 5090 : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 34190458880
- **VRAM Free:** 32429309952
- **Torch VRAM Total:** 0
- **Torch VRAM Free:** 0
## Logs
2025-12-08T20:52:51.019405 - Adding extra search path custom_nodes C:\ComfyUIx2\custom_nodes
2025-12-08T20:52:51.019405 - Adding extra search path download_model_base C:\ComfyUIx2\models
2025-12-08T20:52:51.019405 - Adding extra search path custom_nodes C:\ComfyUI\resources\ComfyUI\custom_nodes
2025-12-08T20:52:51.019405 - Setting output directory to: C:\ComfyUIx2\output
2025-12-08T20:52:51.019405 - Setting input directory to: C:\ComfyUIx2\input
2025-12-08T20:52:51.019405 - Setting user directory to: C:\ComfyUIx2\user
2025-12-08T20:52:51.515374 - [START] Security scan
2025-12-08T20:52:52.139259 - [DONE] Security scan
2025-12-08T20:52:52.240020 - ## ComfyUI-Manager: installing dependencies done.
2025-12-08T20:52:52.240020 - ** ComfyUI startup time: 2025-12-08 20:52:52.240
2025-12-08T20:52:52.240020 - ** Platform: Windows
2025-12-08T20:52:52.240020 - ** Python version: 3.12.9 (main, Feb 12 2025, 14:52:31) [MSC v.1942 64 bit (AMD64)]
2025-12-08T20:52:52.240020 - ** Python executable: C:\ComfyUIx2\.venv\Scripts\python.exe
2025-12-08T20:52:52.240020 - ** ComfyUI Path: C:\ComfyUI\resources\ComfyUI
2025-12-08T20:52:52.240020 - ** ComfyUI Base Folder Path: C:\ComfyUI\resources\ComfyUI
2025-12-08T20:52:52.240020 - ** User directory: C:\ComfyUIx2\user
2025-12-08T20:52:52.240020 - ** ComfyUI-Manager config path: C:\ComfyUIx2\user\default\ComfyUI-Manager\config.ini
2025-12-08T20:52:52.240020 - ** Log path: C:\ComfyUIx2\user\comfyui.log
2025-12-08T20:52:52.851348 - [ComfyUI-Manager] Skipped fixing the 'comfyui-frontend-package' dependency because the ComfyUI is outdated.
2025-12-08T20:52:52.851348 -
Prestartup times for custom nodes:
2025-12-08T20:52:52.851348 - 1.8 seconds: C:\ComfyUI\resources\ComfyUI\custom_nodes\ComfyUI-Manager
2025-12-08T20:52:52.851348 -
2025-12-08T20:52:54.038478 - Checkpoint files will always be loaded safely.
2025-12-08T20:52:54.126976 - Total VRAM 32607 MB, total RAM 64792 MB
2025-12-08T20:52:54.126976 - pytorch version: 2.8.0+cu129
2025-12-08T20:52:54.126976 - Set vram state to: NORMAL_VRAM
2025-12-08T20:52:54.126976 - Device: cuda:0 NVIDIA GeForce RTX 5090 : cudaMallocAsync
2025-12-08T20:52:54.138251 - Using async weight offloading with 2 streams
2025-12-08T20:52:54.138251 - Enabled pinned memory 29156.0
2025-12-08T20:52:55.081017 - Using pytorch attention
2025-12-08T20:52:55.284938 - C:\ComfyUIx2\.venv\Lib\site-packages\transformers\utils\hub.py:110: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
warnings.warn(
2025-12-08T20:52:56.682894 - Python version: 3.12.9 (main, Feb 12 2025, 14:52:31) [MSC v.1942 64 bit (AMD64)]
2025-12-08T20:52:56.682894 - ComfyUI version: 0.3.76
2025-12-08T20:52:56.707407 - [Prompt Server] web root: C:\ComfyUI\resources\ComfyUI\web_custom_versions\desktop_app
2025-12-08T20:52:57.211304 - Total VRAM 32607 MB, total RAM 64792 MB
2025-12-08T20:52:57.211304 - pytorch version: 2.8.0+cu129
2025-12-08T20:52:57.211304 - Set vram state to: NORMAL_VRAM
2025-12-08T20:52:57.211304 - Device: cuda:0 NVIDIA GeForce RTX 5090 : cudaMallocAsync
2025-12-08T20:52:57.232506 - Using async weight offloading with 2 streams
2025-12-08T20:52:57.234020 - Enabled pinned memory 29156.0
2025-12-08T20:52:57.538433 - ### Loading: ComfyUI-Manager (V3.36)
2025-12-08T20:52:57.538433 - [ComfyUI-Manager] network_mode: public
2025-12-08T20:52:57.538433 - ### ComfyUI Revision: UNKNOWN (The currently installed ComfyUI is not a Git repository)
2025-12-08T20:52:57.545971 -
Import times for custom nodes:
2025-12-08T20:52:57.545971 - 0.0 seconds: C:\ComfyUI\resources\ComfyUI\custom_nodes\websocket_image_save.py
2025-12-08T20:52:57.545971 - 0.0 seconds: C:\ComfyUI\resources\ComfyUI\custom_nodes\ComfyUI-Manager
2025-12-08T20:52:57.545971 -
2025-12-08T20:52:57.628122 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-12-08T20:52:57.635753 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-12-08T20:52:57.664928 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-12-08T20:52:57.695492 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-12-08T20:52:57.846434 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-12-08T20:52:57.958670 - Failed to initialize database. Please ensure you have installed the latest requirements. If the error persists, please report this as in future the database will be required: (sqlite3.OperationalError) unable to open database file
(Background on this error at: https://sqlalche.me/e/20/e3q8)
2025-12-08T20:52:57.981638 - Starting server
2025-12-08T20:52:57.981638 - To see the GUI go to: http://127.0.0.1:8000
2025-12-08T20:52:59.363306 - comfyui-frontend-package not found in requirements.txt
2025-12-08T20:52:59.457330 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-12-08T20:52:59.460354 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /extensions/core/groupNode.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-12-08T20:52:59.550009 - comfyui-frontend-package not found in requirements.txt
2025-12-08T20:52:59.638743 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui/components/buttonGroup.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-12-08T20:52:59.640343 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui/components/button.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-12-08T20:53:02.137466 - FETCH ComfyRegistry Data: 5/110
2025-12-08T20:53:06.415744 - FETCH ComfyRegistry Data: 10/110
2025-12-08T20:53:11.425242 - FETCH ComfyRegistry Data: 15/110
2025-12-08T20:53:15.704541 - FETCH ComfyRegistry Data: 20/110
2025-12-08T20:53:20.712386 - FETCH ComfyRegistry Data: 25/110
2025-12-08T20:53:24.952698 - FETCH ComfyRegistry Data: 30/110
2025-12-08T20:53:29.489074 - FETCH ComfyRegistry Data: 35/110
2025-12-08T20:53:33.757124 - FETCH ComfyRegistry Data: 40/110
2025-12-08T20:53:38.046624 - FETCH ComfyRegistry Data: 45/110
2025-12-08T20:53:42.370018 - FETCH ComfyRegistry Data: 50/110
2025-12-08T20:53:46.621691 - FETCH ComfyRegistry Data: 55/110
2025-12-08T20:53:50.937854 - FETCH ComfyRegistry Data: 60/110
2025-12-08T20:53:55.253707 - FETCH ComfyRegistry Data: 65/110
2025-12-08T20:54:00.043422 - FETCH ComfyRegistry Data: 70/110
2025-12-08T20:54:04.340593 - FETCH ComfyRegistry Data: 75/110
2025-12-08T20:54:08.626063 - FETCH ComfyRegistry Data: 80/110
2025-12-08T20:54:12.935997 - FETCH ComfyRegistry Data: 85/110
2025-12-08T20:54:17.257865 - FETCH ComfyRegistry Data: 90/110
2025-12-08T20:54:21.606178 - FETCH ComfyRegistry Data: 95/110
2025-12-08T20:54:26.615541 - FETCH ComfyRegistry Data: 100/110
2025-12-08T20:54:30.933644 - FETCH ComfyRegistry Data: 105/110
2025-12-08T20:54:35.371812 - FETCH ComfyRegistry Data: 110/110
2025-12-08T20:54:35.873046 - FETCH ComfyRegistry Data [DONE]
2025-12-08T20:54:36.012151 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
2025-12-08T20:54:36.025977 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [DONE]
2025-12-08T20:54:36.180662 - [ComfyUI-Manager] broken item:{'author': 'rjgoif', 'title': 'Img Label Tools', 'id': 'Img-Label-Tools', 'reference': 'https://github.com/rjgoif/ComfyUI-Img-Label-Tools', 'install_type': 'git-clone', 'description': 'Tools to help annotate images for sharing on Reddit, Discord, etc.'}
2025-12-08T20:54:36.197530 - [ComfyUI-Manager] All startup tasks have been completed.
2025-12-08T20:56:55.742245 - got prompt
2025-12-08T20:56:55.774352 - Using pytorch attention in VAE
2025-12-08T20:56:55.774352 - Using pytorch attention in VAE
2025-12-08T20:56:55.962938 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-12-08T20:56:56.106020 - Using scaled fp8: fp8 matrix mult: False, scale input: False
2025-12-08T20:56:56.522519 - Requested to load WanTEModel
2025-12-08T20:56:56.527102 - loaded completely; 95367431640625005117571072.00 MB usable, 6419.48 MB loaded, full load: True
2025-12-08T20:56:56.528626 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16
2025-12-08T20:57:11.515113 - Requested to load WanVAE
2025-12-08T20:57:11.560672 - loaded completely; 22620.71 MB usable, 242.03 MB loaded, full load: True
2025-12-08T20:57:12.300382 - Using scaled fp8: fp8 matrix mult: True, scale input: True
2025-12-08T20:57:12.330562 - model weight dtype torch.float16, manual cast: None
2025-12-08T20:57:12.330562 - model_type FLOW
2025-12-08T20:57:18.144820 - Requested to load WAN21
2025-12-08T20:57:21.906841 - loaded completely; 22037.77 MB usable, 13629.08 MB loaded, full load: True
2025-12-08T20:57:42.012192 - 100%|██████████| 10/10 [00:20<00:00, 2.03s/it]
100%|██████████| 10/10 [00:20<00:00, 2.01s/it]
2025-12-08T20:57:42.479600 - Using scaled fp8: fp8 matrix mult: True, scale input: True
2025-12-08T20:57:42.522285 - model weight dtype torch.float16, manual cast: None
2025-12-08T20:57:42.522285 - model_type FLOW
2025-12-08T20:57:47.690501 - Requested to load WAN21
2025-12-08T20:57:50.008726 - Unloaded partially: 1402.02 MB freed, 12229.43 MB remains loaded, 75.01 MB buffer reserved, lowvram patches: 0
2025-12-08T20:57:53.796691 - loaded completely; 16469.85 MB usable, 13629.08 MB loaded, full load: True
2025-12-08T20:58:14.009170 - 100%|██████████| 10/10 [00:20<00:00, 2.04s/it]
100%|██████████| 10/10 [00:20<00:00, 2.02s/it]
2025-12-08T20:58:14.184338 - Requested to load WanVAE
2025-12-08T20:58:14.258122 - loaded completely; 2137.87 MB usable, 242.03 MB loaded, full load: True
2025-12-08T20:58:15.424459 - Prompt executed in 79.68 seconds
2025-12-08T20:58:15.603551 - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
Traceback (most recent call last):
File "C:\Users\wapwr\AppData\Roaming\uv\python\cpython-3.12.9-windows-x86_64-none\Lib\asyncio\events.py", line 88, in _run
self._context.run(self._callback, *self._args)
File "C:\Users\wapwr\AppData\Roaming\uv\python\cpython-3.12.9-windows-x86_64-none\Lib\asyncio\proactor_events.py", line 165, in _call_connection_lost
self._sock.shutdown(socket.SHUT_RDWR)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
2025-12-08T20:59:11.415751 - got prompt
2025-12-08T20:59:13.881560 - !!! Exception during processing !!! CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-12-08T20:59:13.884583 - Traceback (most recent call last):
File "C:\ComfyUI\resources\ComfyUI\execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\execution.py", line 298, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "C:\ComfyUI\resources\ComfyUI\execution.py", line 286, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\nodes.py", line 1569, in sample
return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\nodes.py", line 1502, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\sample.py", line 60, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1163, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1053, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 1035, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\samplers.py", line 984, in outer_sample
self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds, self.model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\sampler_helpers.py", line 130, in prepare_sampling
return executor.execute(model, noise_shape, conds, model_options=model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\sampler_helpers.py", line 138, in _prepare_sampling
comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required + inference_memory, minimum_memory_required=minimum_memory_required + inference_memory)
File "C:\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 671, in load_models_gpu
free_memory(total_memory_required[device] * 1.1 + extra_mem, device)
File "C:\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 603, in free_memory
if current_loaded_models[i].model_unload(memory_to_free):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 526, in model_unload
freed = self.model.partially_unload(self.model.offload_device, memory_to_free)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 913, in partially_unload
m.to(device_to)
File "C:\ComfyUIx2\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1369, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUIx2\.venv\Lib\site-packages\torch\nn\modules\module.py", line 955, in _apply
param_applied = fn(param)
^^^^^^^^^
File "C:\ComfyUIx2\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1355, in convert
return t.to(
^^^^^
torch.AcceleratorError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-12-08T20:59:13.886133 - Prompt executed in 2.47 seconds
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
Workflow too large. Please manually upload the workflow from local file system.
## Additional Context
(Please add any additional context or steps to reproduce the error here)
Experiencing the same issue.
I found a solution: I had to re-install an old version, get things up and running, and then re-upgrade to the latest.
Try with the --lowvram or --medvram flag.
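For anyone unsure where such flags go: they are command-line arguments to ComfyUI's main.py. A sketch assuming a standard manual install (the desktop app builds its argument list itself, as seen in the report's Arguments line, so flags have to be added through its settings instead):

```shell
# Hypothetical launch line for a manual ComfyUI install.
# --lowvram makes the memory manager offload model weights to
# system RAM far more aggressively, trading speed for headroom.
python main.py --lowvram
```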
The flow: loading the model to the GPU needs more VRAM, so free_memory() is called to unload part of an already-loaded model. free_memory() calls model_unload(), which calls partially_unload(), which calls m.to(device_to) to move part of the model to system RAM; it is during this move that the CUDA out-of-memory error can occur.
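To make that call chain concrete, here is a minimal, hypothetical sketch of the memory-freeing loop (names simplified; the real logic lives in comfy/model_management.py and comfy/model_patcher.py, and the actual OOM fires inside the m.to() transfer, not in this bookkeeping):

```python
class FakeModel:
    """Stands in for one loaded model tracked by the memory manager."""

    def __init__(self, size_mb: float):
        self.size_mb = size_mb  # VRAM this model currently occupies

    def partially_unload(self, amount_mb: float) -> float:
        # The real code calls m.to(offload_device) on submodules here.
        # That host transfer itself allocates (e.g. staging/pinned
        # buffers), which is where the reported CUDA OOM is raised.
        freed = min(self.size_mb, amount_mb)
        self.size_mb -= freed
        return freed


def free_memory(memory_to_free_mb: float, loaded_models: list) -> float:
    """Ask each loaded model to give back VRAM until enough is freed."""
    freed_total = 0.0
    for m in loaded_models:
        if freed_total >= memory_to_free_mb:
            break
        freed_total += m.partially_unload(memory_to_free_mb - freed_total)
    return freed_total
```

For example, freeing 1200 MB from models of 1000 MB and 500 MB empties the first entirely and takes 200 MB from the second.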
This suddenly started happening on my old workflows... on an RTX A6000. The first run works fine, the second hits OOM, and it crashes out of ComfyUI; the .bat window closes.
I think it has to do with multi-GPU handling OR Sage attention.
I ran with --cuda-device 0 and Sage attention, and now it's working properly.