CUDA error: invalid argument
Custom Node Testing
- [ ] I have tried disabling custom nodes and the issue persists (see how to disable custom nodes if you need help)
Expected Behavior
The workflow should run successfully two or more times in a row, as it did before the update.
Actual Behavior
I updated ComfyUI today. I did not have this kind of problem on my last use (3 days ago). The first run seems to be fine. Since my GPU is not the most powerful (RTX 4070 Super), I clean the memory (VRAM and RAM) at the end of each workflow. Despite this, since this morning I can no longer run the same workflow multiple times when using a LoRA in addition to Lightning; I get an error message (see the log).
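For context, the cleanup step at the end of my workflow does roughly the following (a minimal sketch in plain PyTorch; the actual custom node may do more, and the function name here is just illustrative):

```python
import gc
import torch

def cleanup_memory():
    """Rough equivalent of the end-of-workflow cleanup node (illustrative only)."""
    gc.collect()                    # free RAM held by dead Python objects
    if torch.cuda.is_available():
        torch.cuda.empty_cache()    # return cached VRAM blocks to the driver
        torch.cuda.ipc_collect()    # release unused CUDA IPC handles, if any

cleanup_memory()
```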
Steps to Reproduce
Run my workflow once with qwen-image-2509 + Lightning 8stp 1.1 + 1 LoRA, then try to run it a second time.
Debug Logs
# ComfyUI Error Report
## Error Details
- **Node ID:** 110
- **Node Type:** TextEncodeQwenImageEditPlus
- **Exception Type:** torch.AcceleratorError
- **Exception Message:** CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
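As the message suggests, setting `CUDA_LAUNCH_BLOCKING=1` makes kernel launches synchronous, so the stack trace points at the real failing call. A minimal sketch of one way to do that (the variable must be set before CUDA is initialized; it can equally be set in the shell before launching ComfyUI):

```python
import os

# Must run before the first CUDA call, i.e. before torch touches the GPU.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch

x = torch.randn(4, device="cuda")  # CUDA errors now surface at the offending call
```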
## Stack Trace
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\custom_nodes\ComfyUI-Lora-Manager\py\metadata_collector\metadata_hook.py", line 165, in async_map_node_over_list_with_metadata
results = await original_map_node_over_list(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 298, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 286, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
return method(locked_class, **inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_api\latest\_io.py", line 1275, in EXECUTE_NORMALIZED
to_return = cls.execute(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_extras\nodes_qwen.py", line 101, in execute
conditioning = clip.encode_from_tokens_scheduled(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 177, in encode_from_tokens_scheduled
pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 238, in encode_from_tokens
self.load_model()
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 271, in load_model
model_management.load_model_gpu(self.patcher)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 706, in load_model_gpu
return load_models_gpu([model])
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 701, in load_models_gpu
loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 506, in model_load
self.model_use_more_vram(use_more_vram, force_patch_weights=force_patch_weights)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 536, in model_use_more_vram
return self.model.partially_load(self.device, extra_memory, force_patch_weights=force_patch_weights)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 945, in partially_load
raise e
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 942, in partially_load
self.load(device_to, lowvram_model_memory=current_used + extra_memory, force_patch_weights=force_patch_weights, full_load=full_load)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 761, in load
x[2].to(device_to)
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1369, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 955, in _apply
param_applied = fn(param)
^^^^^^^^^
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1355, in convert
return t.to(
^^^^^
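The trace bottoms out in `torch.nn.Module.to()`: the text encoder's parameters are being transferred to `cuda:0` when the error fires. Functionally, the failing step reduces to a plain parameter-by-parameter device move, something like this simplified sketch (not ComfyUI's actual code; `nn.Linear` stands in for the Qwen text encoder):

```python
import torch
import torch.nn as nn

encoder = nn.Linear(16, 16)  # stand-in for QwenImageTEModel_

# Module.to() walks every parameter and calls t.to(device) on each --
# the exact call site ("return t.to(") where the invalid-argument error is raised.
encoder.to(torch.device("cuda:0"))
```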
## System Information
- **ComfyUI Version:** 0.3.71
- **Arguments:** C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\main.py --user-directory C:\ComfyUI\user --input-directory C:\ComfyUI\input --output-directory C:\ComfyUI\output --front-end-root C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\web_custom_versions\desktop_app --base-directory C:\ComfyUI --extra-model-paths-config C:\Users\jerem\AppData\Roaming\ComfyUI\extra_models_config.yaml --log-stdout --listen 127.0.0.1 --port 8000 --use-sage-attention
- **OS:** nt
- **Python Version:** 3.12.11 (main, Aug 18 2025, 19:17:54) [MSC v.1944 64 bit (AMD64)]
- **Embedded Python:** false
- **PyTorch Version:** 2.8.0+cu129
## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 4070 SUPER : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 12878086144
- **VRAM Free:** 11587813376
- **Torch VRAM Total:** 0
- **Torch VRAM Free:** 0
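For readability, the raw byte counts above work out to roughly 12 GiB total and 10.8 GiB free:

```python
# Convert the reported VRAM figures from bytes to GiB (1 GiB = 1024**3 bytes).
vram_total = 12878086144
vram_free = 11587813376
print(f"VRAM total: {vram_total / 1024**3:.2f} GiB")  # ~11.99 GiB
print(f"VRAM free:  {vram_free / 1024**3:.2f} GiB")   # ~10.79 GiB
```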
## Logs
2025-11-24T15:05:17.062398 - - Clearing Cache...
2025-11-24T15:05:17.391913 - Starting RAM cleanup - current usage: 65.4%, available: 22628.0MB
2025-11-24T15:05:20.916004 - Memory usage after cleanup: 61.5%, available: 25165.8MB
2025-11-24T15:05:22.039753 - Memory usage after cleanup: 61.0%, available: 25517.5MB
2025-11-24T15:05:23.131998 - Memory usage after cleanup: 61.0%, available: 25488.8MB
2025-11-24T15:05:24.239196 - Memory usage after cleanup: 60.9%, available: 25570.3MB
2025-11-24T15:05:25.338723 - Memory usage after cleanup: 60.4%, available: 25901.6MB
2025-11-24T15:05:26.418986 - Memory usage after cleanup: 59.9%, available: 26184.5MB
2025-11-24T15:05:26.418986 - Cleanup complete - final memory usage: 59.9%, available: 26184.5MB
2025-11-24T15:05:26.420094 - [Delay Node] Starting delay of 15.0 seconds
2025-11-24T15:05:41.422215 - [Delay Node] Delay of 15.0 seconds completed
2025-11-24T15:05:41.640827 - Prompt executed in 129.58 seconds
2025-11-24T15:05:43.378163 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-24T15:05:43.381667 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-24T15:05:43.427536 - Using xformers attention in VAE
2025-11-24T15:05:43.428535 - Using xformers attention in VAE
2025-11-24T15:05:43.804224 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-11-24T15:05:43.877920 - Using scaled fp8: fp8 matrix mult: False, scale input: False
2025-11-24T15:05:44.034004 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-11-24T15:05:45.725872 - gguf qtypes: F32 (1087), BF16 (6), Q6_K (260), Q5_K (580)
2025-11-24T15:05:45.766730 - model weight dtype torch.bfloat16, manual cast: None
2025-11-24T15:05:45.766730 - model_type FLUX
2025-11-24T15:05:46.174523 - Requested to load WanVAE
2025-11-24T15:05:46.175146 - 0 models unloaded.
2025-11-24T15:05:46.255170 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:05:47.071664 - 0 models unloaded.
2025-11-24T15:05:47.074681 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:05:47.409784 - 0 models unloaded.
2025-11-24T15:05:47.411803 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:05:47.727514 - Requested to load QwenImageTEModel_
2025-11-24T15:05:50.734524 - !!! Exception during processing !!! CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-11-24T15:05:50.745342 - Traceback (most recent call last):
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\custom_nodes\ComfyUI-Lora-Manager\py\metadata_collector\metadata_hook.py", line 165, in async_map_node_over_list_with_metadata
results = await original_map_node_over_list(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 298, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 286, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
return method(locked_class, **inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_api\latest\_io.py", line 1275, in EXECUTE_NORMALIZED
to_return = cls.execute(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_extras\nodes_qwen.py", line 101, in execute
conditioning = clip.encode_from_tokens_scheduled(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 177, in encode_from_tokens_scheduled
pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 238, in encode_from_tokens
self.load_model()
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 271, in load_model
model_management.load_model_gpu(self.patcher)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 706, in load_model_gpu
return load_models_gpu([model])
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 701, in load_models_gpu
loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 506, in model_load
self.model_use_more_vram(use_more_vram, force_patch_weights=force_patch_weights)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 536, in model_use_more_vram
return self.model.partially_load(self.device, extra_memory, force_patch_weights=force_patch_weights)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 945, in partially_load
raise e
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 942, in partially_load
self.load(device_to, lowvram_model_memory=current_used + extra_memory, force_patch_weights=force_patch_weights, full_load=full_load)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 761, in load
x[2].to(device_to)
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1369, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 955, in _apply
param_applied = fn(param)
^^^^^^^^^
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1355, in convert
return t.to(
^^^^^
torch.AcceleratorError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-11-24T15:05:50.747346 - Prompt executed in 7.44 seconds
2025-11-24T15:05:50.792514 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-24T15:05:50.795068 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-24T15:05:50.857408 - 0 models unloaded.
2025-11-24T15:05:50.859412 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:05:51.283910 - 0 models unloaded.
2025-11-24T15:05:51.285418 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:05:51.594878 - Requested to load QwenImageTEModel_
2025-11-24T15:05:54.372747 - !!! Exception during processing !!! CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-11-24T15:05:54.373751 - Traceback (most recent call last):
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\custom_nodes\ComfyUI-Lora-Manager\py\metadata_collector\metadata_hook.py", line 165, in async_map_node_over_list_with_metadata
results = await original_map_node_over_list(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 298, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 286, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
return method(locked_class, **inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_api\latest\_io.py", line 1275, in EXECUTE_NORMALIZED
to_return = cls.execute(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_extras\nodes_qwen.py", line 101, in execute
conditioning = clip.encode_from_tokens_scheduled(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 177, in encode_from_tokens_scheduled
pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 238, in encode_from_tokens
self.load_model()
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 271, in load_model
model_management.load_model_gpu(self.patcher)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 706, in load_model_gpu
return load_models_gpu([model])
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 701, in load_models_gpu
loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 506, in model_load
self.model_use_more_vram(use_more_vram, force_patch_weights=force_patch_weights)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 536, in model_use_more_vram
return self.model.partially_load(self.device, extra_memory, force_patch_weights=force_patch_weights)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 945, in partially_load
raise e
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 942, in partially_load
self.load(device_to, lowvram_model_memory=current_used + extra_memory, force_patch_weights=force_patch_weights, full_load=full_load)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 761, in load
x[2].to(device_to)
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1369, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 955, in _apply
param_applied = fn(param)
^^^^^^^^^
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1355, in convert
return t.to(
^^^^^
torch.AcceleratorError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-11-24T15:05:54.374333 - Prompt executed in 3.63 seconds
2025-11-24T15:06:41.487202 - got prompt
2025-11-24T15:06:41.593706 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-24T15:06:41.597215 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-24T15:06:41.610938 - 0 models unloaded.
2025-11-24T15:06:41.613440 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:06:42.205334 - 0 models unloaded.
2025-11-24T15:06:42.207339 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:06:42.520452 - Requested to load QwenImageTEModel_
2025-11-24T15:06:45.435959 - !!! Exception during processing !!! CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-11-24T15:06:45.436959 - Traceback (most recent call last):
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\custom_nodes\ComfyUI-Lora-Manager\py\metadata_collector\metadata_hook.py", line 165, in async_map_node_over_list_with_metadata
results = await original_map_node_over_list(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 298, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 286, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
return method(locked_class, **inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_api\latest\_io.py", line 1275, in EXECUTE_NORMALIZED
to_return = cls.execute(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_extras\nodes_qwen.py", line 101, in execute
conditioning = clip.encode_from_tokens_scheduled(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 177, in encode_from_tokens_scheduled
pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 238, in encode_from_tokens
self.load_model()
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 271, in load_model
model_management.load_model_gpu(self.patcher)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 706, in load_model_gpu
return load_models_gpu([model])
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 701, in load_models_gpu
loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 506, in model_load
self.model_use_more_vram(use_more_vram, force_patch_weights=force_patch_weights)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 536, in model_use_more_vram
return self.model.partially_load(self.device, extra_memory, force_patch_weights=force_patch_weights)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 945, in partially_load
raise e
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 942, in partially_load
self.load(device_to, lowvram_model_memory=current_used + extra_memory, force_patch_weights=force_patch_weights, full_load=full_load)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 761, in load
x[2].to(device_to)
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1369, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 955, in _apply
param_applied = fn(param)
^^^^^^^^^
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1355, in convert
return t.to(
^^^^^
torch.AcceleratorError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-11-24T15:06:45.438959 - Prompt executed in 3.90 seconds
2025-11-24T15:06:50.308370 - got prompt
2025-11-24T15:06:50.415061 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-24T15:06:50.419567 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-24T15:06:50.434685 - 0 models unloaded.
2025-11-24T15:06:50.436687 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:06:51.005564 - 0 models unloaded.
2025-11-24T15:06:51.007616 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:06:51.328931 - Requested to load QwenImageTEModel_
2025-11-24T15:06:54.117056 - !!! Exception during processing !!! CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-11-24T15:06:54.117056 - Traceback (most recent call last):
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\custom_nodes\ComfyUI-Lora-Manager\py\metadata_collector\metadata_hook.py", line 165, in async_map_node_over_list_with_metadata
results = await original_map_node_over_list(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 298, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 286, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
return method(locked_class, **inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_api\latest\_io.py", line 1275, in EXECUTE_NORMALIZED
to_return = cls.execute(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_extras\nodes_qwen.py", line 101, in execute
conditioning = clip.encode_from_tokens_scheduled(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 177, in encode_from_tokens_scheduled
pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 238, in encode_from_tokens
self.load_model()
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 271, in load_model
model_management.load_model_gpu(self.patcher)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 706, in load_model_gpu
return load_models_gpu([model])
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 701, in load_models_gpu
loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 506, in model_load
self.model_use_more_vram(use_more_vram, force_patch_weights=force_patch_weights)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 536, in model_use_more_vram
return self.model.partially_load(self.device, extra_memory, force_patch_weights=force_patch_weights)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 945, in partially_load
raise e
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 942, in partially_load
self.load(device_to, lowvram_model_memory=current_used + extra_memory, force_patch_weights=force_patch_weights, full_load=full_load)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 761, in load
x[2].to(device_to)
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1369, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 955, in _apply
param_applied = fn(param)
^^^^^^^^^
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1355, in convert
return t.to(
^^^^^
torch.AcceleratorError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-11-24T15:06:54.119077 - Prompt executed in 3.76 seconds
2025-11-24T15:06:58.091921 - got prompt
2025-11-24T15:06:58.199895 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-24T15:06:58.204434 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-24T15:06:58.218026 - 0 models unloaded.
2025-11-24T15:06:58.220530 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:06:58.727265 - 0 models unloaded.
2025-11-24T15:06:58.729265 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:06:59.044642 - Requested to load QwenImageTEModel_
2025-11-24T15:07:01.913360 - !!! Exception during processing !!! CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-11-24T15:07:01.914385 - Traceback (most recent call last):
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\custom_nodes\ComfyUI-Lora-Manager\py\metadata_collector\metadata_hook.py", line 165, in async_map_node_over_list_with_metadata
results = await original_map_node_over_list(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 298, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 286, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
return method(locked_class, **inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_api\latest\_io.py", line 1275, in EXECUTE_NORMALIZED
to_return = cls.execute(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_extras\nodes_qwen.py", line 101, in execute
conditioning = clip.encode_from_tokens_scheduled(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 177, in encode_from_tokens_scheduled
pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 238, in encode_from_tokens
self.load_model()
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 271, in load_model
model_management.load_model_gpu(self.patcher)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 706, in load_model_gpu
return load_models_gpu([model])
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 701, in load_models_gpu
loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 506, in model_load
self.model_use_more_vram(use_more_vram, force_patch_weights=force_patch_weights)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 536, in model_use_more_vram
return self.model.partially_load(self.device, extra_memory, force_patch_weights=force_patch_weights)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 945, in partially_load
raise e
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 942, in partially_load
self.load(device_to, lowvram_model_memory=current_used + extra_memory, force_patch_weights=force_patch_weights, full_load=full_load)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 761, in load
x[2].to(device_to)
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1369, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 955, in _apply
param_applied = fn(param)
^^^^^^^^^
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1355, in convert
return t.to(
^^^^^
torch.AcceleratorError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-11-24T15:07:01.915899 - Prompt executed in 3.77 seconds
2025-11-24T15:07:07.915384 - got prompt
2025-11-24T15:07:07.917892 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-24T15:07:08.373460 - Memory cleanup signal sent
2025-11-24T15:07:09.374520 - Unload Model:
2025-11-24T15:07:09.374520 - - Unloading all models...
2025-11-24T15:07:09.378064 - - Clearing Cache...
2025-11-24T15:07:09.647159 - Starting RAM cleanup - current usage: 27.2%, available: 47621.7MB
2025-11-24T15:07:11.049368 - Memory usage after cleanup: 26.7%, available: 47909.4MB
2025-11-24T15:07:12.173307 - Memory usage after cleanup: 26.2%, available: 48270.3MB
2025-11-24T15:07:13.277821 - Memory usage after cleanup: 25.6%, available: 48626.5MB
2025-11-24T15:07:14.374366 - Memory usage after cleanup: 25.1%, available: 48936.1MB
2025-11-24T15:07:15.489535 - Memory usage after cleanup: 24.1%, available: 49604.7MB
2025-11-24T15:07:16.569740 - Memory usage after cleanup: 23.9%, available: 49770.4MB
2025-11-24T15:07:16.570978 - Cleanup complete - final memory usage: 23.9%, available: 49770.4MB
2025-11-24T15:07:16.571978 - [Delay Node] Starting delay of 30.0 seconds
2025-11-24T15:07:46.575096 - [Delay Node] Delay of 30.0 seconds completed
2025-11-24T15:07:46.579697 - Prompt executed in 38.66 seconds
2025-11-24T15:39:39.461449 - got prompt
2025-11-24T15:39:39.575724 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-24T15:39:39.580234 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-24T15:39:39.590790 - Using xformers attention in VAE
2025-11-24T15:39:39.591790 - Using xformers attention in VAE
2025-11-24T15:39:39.739746 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-11-24T15:39:39.823661 - Using scaled fp8: fp8 matrix mult: False, scale input: False
2025-11-24T15:39:40.023204 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-11-24T15:39:42.676666 - gguf qtypes: F32 (1087), BF16 (6), Q6_K (260), Q5_K (580)
2025-11-24T15:39:42.719764 - model weight dtype torch.bfloat16, manual cast: None
2025-11-24T15:39:42.719764 - model_type FLUX
2025-11-24T15:39:44.163567 - Requested to load WanVAE
2025-11-24T15:39:44.164568 - 0 models unloaded.
2025-11-24T15:39:44.238391 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:39:45.112212 - 0 models unloaded.
2025-11-24T15:39:45.114231 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:39:45.447914 - 0 models unloaded.
2025-11-24T15:39:45.449914 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:39:45.747945 - Requested to load QwenImageTEModel_
2025-11-24T15:39:47.297765 - loaded completely; 9435.68 MB usable, 7909.74 MB loaded, full load: True
2025-11-24T15:39:49.769089 - 0 models unloaded.
2025-11-24T15:39:49.771091 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:39:50.094475 - 0 models unloaded.
2025-11-24T15:39:50.096500 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:39:50.396169 - Requested to load QwenImageTEModel_
2025-11-24T15:39:51.971564 - loaded completely; 9435.68 MB usable, 7909.74 MB loaded, full load: True
2025-11-24T15:39:52.767553 - Requested to load QwenImage
2025-11-24T15:39:56.860241 - loaded partially; 8029.41 MB usable, 8029.41 MB loaded, 6321.41 MB offloaded, lowvram patches: 0
2025-11-24T15:39:56.865748 - Attempting to release mmap (714)
2025-11-24T15:41:25.845350 - 100%|██████████| 8/8 [01:23<00:00, 10.49s/it]
2025-11-24T15:41:25.848349 - Requested to load WanVAE
2025-11-24T15:41:25.852350 - Tried to unpin tensor not pinned by ComfyUI
    [the same message repeated 93 times in total within ~4 ms]
2025-11-24T15:41:28.115753 - 0 models unloaded.
2025-11-24T15:41:28.196669 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:41:29.255821 - Memory cleanup signal sent
2025-11-24T15:41:30.503970 - Unload Model:
2025-11-24T15:41:30.503970 - - Unloading all models...
2025-11-24T15:41:30.596022 - - Clearing Cache...
2025-11-24T15:41:31.400319 - Starting RAM cleanup - current usage: 69.8%, available: 19743.3MB
2025-11-24T15:41:35.270489 - Memory usage after cleanup: 64.2%, available: 23412.8MB
2025-11-24T15:41:36.405427 - Memory usage after cleanup: 63.2%, available: 24050.9MB
2025-11-24T15:41:37.620289 - Memory usage after cleanup: 62.6%, available: 24426.3MB
2025-11-24T15:41:38.737049 - Memory usage after cleanup: 62.0%, available: 24818.2MB
2025-11-24T15:41:39.839153 - Memory usage after cleanup: 61.8%, available: 25001.0MB
2025-11-24T15:41:41.001915 - Memory usage after cleanup: 61.5%, available: 25187.7MB
2025-11-24T15:41:41.001915 - Cleanup complete - final memory usage: 61.5%, available: 25187.7MB
2025-11-24T15:41:41.003431 - [Delay Node] Starting delay of 15.0 seconds
2025-11-24T15:41:56.004806 - [Delay Node] Delay of 15.0 seconds completed
2025-11-24T15:41:56.013315 - Prompt executed in 136.49 seconds
2025-11-24T15:42:24.778136 - got prompt
2025-11-24T15:42:24.906765 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-24T15:42:24.911271 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-24T15:42:24.954209 - Using xformers attention in VAE
2025-11-24T15:42:24.955213 - Using xformers attention in VAE
2025-11-24T15:42:25.341869 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-11-24T15:42:25.483856 - Using scaled fp8: fp8 matrix mult: False, scale input: False
2025-11-24T15:42:25.627783 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-11-24T15:42:29.957861 - gguf qtypes: F32 (1087), BF16 (6), Q6_K (260), Q5_K (580)
2025-11-24T15:42:30.005472 - model weight dtype torch.bfloat16, manual cast: None
2025-11-24T15:42:30.006475 - model_type FLUX
2025-11-24T15:42:30.580293 - Requested to load WanVAE
2025-11-24T15:42:30.581293 - 0 models unloaded.
2025-11-24T15:42:30.716900 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:42:31.474955 - 0 models unloaded.
2025-11-24T15:42:31.476958 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:42:31.804238 - 0 models unloaded.
2025-11-24T15:42:31.807293 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-24T15:42:32.109474 - Requested to load QwenImageTEModel_
2025-11-24T15:42:35.132978 - !!! Exception during processing !!! CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-11-24T15:42:35.138487 - Traceback (most recent call last):
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\custom_nodes\ComfyUI-Lora-Manager\py\metadata_collector\metadata_hook.py", line 165, in async_map_node_over_list_with_metadata
results = await original_map_node_over_list(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 298, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 286, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
return method(locked_class, **inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_api\latest\_io.py", line 1275, in EXECUTE_NORMALIZED
to_return = cls.execute(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_extras\nodes_qwen.py", line 101, in execute
conditioning = clip.encode_from_tokens_scheduled(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 177, in encode_from_tokens_scheduled
pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 238, in encode_from_tokens
self.load_model()
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 271, in load_model
model_management.load_model_gpu(self.patcher)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 706, in load_model_gpu
return load_models_gpu([model])
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 701, in load_models_gpu
loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 506, in model_load
self.model_use_more_vram(use_more_vram, force_patch_weights=force_patch_weights)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 536, in model_use_more_vram
return self.model.partially_load(self.device, extra_memory, force_patch_weights=force_patch_weights)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 945, in partially_load
raise e
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 942, in partially_load
self.load(device_to, lowvram_model_memory=current_used + extra_memory, force_patch_weights=force_patch_weights, full_load=full_load)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 761, in load
x[2].to(device_to)
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1369, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 955, in _apply
param_applied = fn(param)
^^^^^^^^^
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1355, in convert
return t.to(
^^^^^
torch.AcceleratorError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-11-24T15:42:35.139486 - Prompt executed in 10.31 seconds
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
Workflow too large. Please manually upload the workflow from local file system.
## Additional Context
(Please add any additional context or steps to reproduce the error here)
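Since the traceback itself suggests passing CUDA_LAUNCH_BLOCKING=1, below is a minimal sketch of relaunching ComfyUI with synchronous CUDA error reporting so the reported call site is accurate. The launcher path and arguments are assumptions taken from the log above; adjust them to your install.

```python
import os
import subprocess

# Force synchronous kernel launches so "invalid argument" is raised at the
# real failing CUDA call instead of some later, unrelated API call.
env = dict(os.environ)
env["CUDA_LAUNCH_BLOCKING"] = "1"

# Hypothetical launch; mirror whatever arguments your install normally uses.
subprocess.run(
    ["python", "main.py", "--listen", "127.0.0.1", "--port", "8000"],
    cwd=r"C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI",
    env=env,
)
```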
Other
After restarting Comfy, the workflow runs fine, once again.
Same issue here. First render is OK... then I get this error. If I close ComfyUI and restart, the first render will be OK again, then this error shows up again. This was not there before the latest update of ComfyUI.
Hi, if you are experiencing this issue, can you confirm that your version of ComfyUI-GGUF is up to date with the latest release?
Updated ComfyUI-GGUF; still have this issue.
How consistent is your reproducer, and can I get a full log repaste? These traces can vary from person to person in critical ways.
Hello
I have the same issue.
Full log:
RuntimeError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/AI/ComfyUI/execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/AI/ComfyUI/execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/AI/ComfyUI/execution.py", line 298, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "/mnt/AI/ComfyUI/execution.py", line 286, in process_inputs
result = f(**inputs)
File "/mnt/AI/ComfyUI/nodes.py", line 1559, in sample
return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
File "/mnt/AI/ComfyUI/nodes.py", line 1492, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
denoise=denoise, disable_noise=disable_noise, start_step=start_step, last_step=last_step,
force_full_denoise=force_full_denoise, noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/mnt/AI/ComfyUI/comfy/sample.py", line 60, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/mnt/AI/ComfyUI/comfy/samplers.py", line 1163, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/mnt/AI/ComfyUI/comfy/samplers.py", line 1053, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/AI/ComfyUI/comfy/samplers.py", line 1035, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
File "/mnt/AI/ComfyUI/comfy/patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/mnt/AI/ComfyUI/comfy/samplers.py", line 984, in outer_sample
self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds, self.model_options)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/AI/ComfyUI/comfy/sampler_helpers.py", line 130, in prepare_sampling
return executor.execute(model, noise_shape, conds, model_options=model_options)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/AI/ComfyUI/comfy/patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/mnt/AI/ComfyUI/comfy/sampler_helpers.py", line 138, in _prepare_sampling
comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required + inference_memory, minimum_memory_required=minimum_memory_required + inference_memory)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/AI/ComfyUI/comfy/model_management.py", line 701, in load_models_gpu
loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/AI/ComfyUI/comfy/model_management.py", line 506, in model_load
self.model_use_more_vram(use_more_vram, force_patch_weights=force_patch_weights)
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/AI/ComfyUI/comfy/model_management.py", line 536, in model_use_more_vram
return self.model.partially_load(self.device, extra_memory, force_patch_weights=force_patch_weights)
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/AI/ComfyUI/comfy/model_patcher.py", line 944, in partially_load
self.detach()
~~~~~~~~~~~^^
File "/mnt/AI/ComfyUI/comfy/model_patcher.py", line 953, in detach
self.unpatch_model(self.offload_device, unpatch_weights=unpatch_all)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/AI/ComfyUI/custom_nodes/gguf/pig.py", line 64, in unpatch_model
return super().unpatch_model(device_to=device_to, unpatch_weights=unpatch_weights)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/AI/ComfyUI/comfy/model_patcher.py", line 832, in unpatch_model
self.model.to(device_to)
~~~~~~~~~~~~~^^^^^^^^^^^
File "/home/alberto/.local/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1343, in to
return self._apply(convert)
~~~~~~~~~~~^^^^^^^^^
File "/home/alberto/.local/lib/python3.13/site-packages/torch/nn/modules/module.py", line 903, in _apply
module._apply(fn)
~~~~~~~~~~~~~^^^^
File "/home/alberto/.local/lib/python3.13/site-packages/torch/nn/modules/module.py", line 903, in _apply
module._apply(fn)
~~~~~~~~~~~~~^^^^
File "/home/alberto/.local/lib/python3.13/site-packages/torch/nn/modules/module.py", line 903, in _apply
module._apply(fn)
~~~~~~~~~~~~~^^^^
File "/home/alberto/.local/lib/python3.13/site-packages/torch/nn/modules/module.py", line 930, in _apply
param_applied = fn(param)
File "/home/alberto/.local/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1329, in convert
return t.to(
~~~~^
device,
^^^^^^^
dtype if t.is_floating_point() or t.is_complex() else None,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
non_blocking,
^^^^^^^^^^^^^
)
^
File "/mnt/AI/ComfyUI/custom_nodes/gguf/pig.py", line 104, in to
new = super().to(*args, **kwargs)
File "/home/alberto/.local/lib/python3.13/site-packages/torch/_tensor.py", line 1648, in torch_function
ret = func(*args, **kwargs)
RuntimeError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
Hi, you are on an alternate custom node pack of GGUF that probably still doesn't have the update needed. Can you try with the latest version of the City96 ComfyUI-GGUF custom node pack as your GGUF provider?
I had this issue... and ended up re-installing ComfyUI desktop... plus re-installing all the nodes for my workflow.
Now it works fine. No errors. No more first generation OK and subsequent ones black.
I have this too since the last update. I will try a reinstall. I really wish this could be fixed without having to resort to that.
I had a similar issue: it would work fine at first but fail after that. It was fixed for me by adding --disable-pinned-memory to the launch arguments. Pinned memory doesn't seem to be working 100% correctly.
I updated all my custom nodes, including gguf (comfyui-gguf:1.1.6, or gguf:2.6.8). I installed today's ComfyUI update (0.3.72) and the problem is still present. I removed the --low-vram launch argument. ASAP, I'll try the solution of @henrikvilhelmberglund.
I'd like not to have to reinstall Comfy once again, so I'm keeping that idea as a last resort.
----- Here is my last log
# ComfyUI Error Report
## Error Details
- Node ID: 110
- Node Type: TextEncodeQwenImageEditPlus
- Exception Type: torch.AcceleratorError
- Exception Message: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
## Stack Trace
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\custom_nodes\ComfyUI-Lora-Manager\py\metadata_collector\metadata_hook.py", line 165, in async_map_node_over_list_with_metadata
results = await original_map_node_over_list(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 298, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 286, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
return method(locked_class, **inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_api\latest\_io.py", line 1275, in EXECUTE_NORMALIZED
to_return = cls.execute(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_extras\nodes_qwen.py", line 101, in execute
conditioning = clip.encode_from_tokens_scheduled(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 177, in encode_from_tokens_scheduled
pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 238, in encode_from_tokens
self.load_model()
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 271, in load_model
model_management.load_model_gpu(self.patcher)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 706, in load_model_gpu
return load_models_gpu([model])
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 701, in load_models_gpu
loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 506, in model_load
self.model_use_more_vram(use_more_vram, force_patch_weights=force_patch_weights)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 536, in model_use_more_vram
return self.model.partially_load(self.device, extra_memory, force_patch_weights=force_patch_weights)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 944, in partially_load
raise e
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 941, in partially_load
self.load(device_to, lowvram_model_memory=current_used + extra_memory, force_patch_weights=force_patch_weights, full_load=full_load)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 760, in load
x[2].to(device_to)
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1369, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 955, in _apply
param_applied = fn(param)
^^^^^^^^^
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1355, in convert
return t.to(
^^^^^
## System Information
- ComfyUI Version: 0.3.72
- Arguments: C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\main.py --user-directory C:\ComfyUI\user --input-directory C:\ComfyUI\input --output-directory C:\ComfyUI\output --front-end-root C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\web_custom_versions\desktop_app --base-directory C:\ComfyUI --extra-model-paths-config C:\Users\jerem\AppData\Roaming\ComfyUI\extra_models_config.yaml --log-stdout --listen 127.0.0.1 --port 8000 --use-sage-attention
- OS: nt
- Python Version: 3.12.11 (main, Aug 18 2025, 19:17:54) [MSC v.1944 64 bit (AMD64)]
- Embedded Python: false
- PyTorch Version: 2.8.0+cu129
## Devices
- Name: cuda:0 NVIDIA GeForce RTX 4070 SUPER : cudaMallocAsync
- Type: cuda
- VRAM Total: 12878086144
- VRAM Free: 11587813376
- Torch VRAM Total: 0
- Torch VRAM Free: 0
## Logs
2025-11-26T00:29:44.180976 -
2025-11-26T00:29:44.184084 - HTTP Request: GET http://127.0.0.1:11434/api/tags "HTTP/1.1 200 OK"
2025-11-26T00:29:44.184830 - Error fetching Ollama models: 'name'2025-11-26T00:29:44.184830 -
2025-11-26T00:29:45.081760 - FETCH ComfyRegistry Data: 5/1082025-11-26T00:29:45.081760 -
2025-11-26T00:29:48.674110 - FETCH ComfyRegistry Data: 10/1082025-11-26T00:29:48.674110 -
2025-11-26T00:29:52.258290 - FETCH ComfyRegistry Data: 15/1082025-11-26T00:29:52.258290 -
2025-11-26T00:29:55.797656 - FETCH ComfyRegistry Data: 20/1082025-11-26T00:29:55.797656 -
2025-11-26T00:29:59.402441 - FETCH ComfyRegistry Data: 25/1082025-11-26T00:29:59.402441 -
2025-11-26T00:30:03.615746 - FETCH ComfyRegistry Data: 30/1082025-11-26T00:30:03.615746 -
2025-11-26T00:30:07.173500 - FETCH ComfyRegistry Data: 35/1082025-11-26T00:30:07.173500 -
2025-11-26T00:30:10.729678 - FETCH ComfyRegistry Data: 40/1082025-11-26T00:30:10.729678 -
2025-11-26T00:30:14.321933 - FETCH ComfyRegistry Data: 45/1082025-11-26T00:30:14.321933 -
2025-11-26T00:30:18.266126 - FETCH ComfyRegistry Data: 50/1082025-11-26T00:30:18.266126 -
2025-11-26T00:30:21.926163 - FETCH ComfyRegistry Data: 55/1082025-11-26T00:30:21.926163 -
2025-11-26T00:30:25.503363 - FETCH ComfyRegistry Data: 60/1082025-11-26T00:30:25.503363 -
2025-11-26T00:30:29.521440 - FETCH ComfyRegistry Data: 65/1082025-11-26T00:30:29.521440 -
2025-11-26T00:30:33.124127 - FETCH ComfyRegistry Data: 70/1082025-11-26T00:30:33.124127 -
2025-11-26T00:30:36.744867 - FETCH ComfyRegistry Data: 75/1082025-11-26T00:30:36.745868 -
2025-11-26T00:30:40.331473 - FETCH ComfyRegistry Data: 80/1082025-11-26T00:30:40.331473 -
2025-11-26T00:30:43.731881 - got prompt
2025-11-26T00:30:43.838448 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-26T00:30:43.841957 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-26T00:30:43.850600 - Using xformers attention in VAE
2025-11-26T00:30:43.851599 - Using xformers attention in VAE
2025-11-26T00:30:43.969527 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-11-26T00:30:43.991561 - FETCH ComfyRegistry Data: 85/1082025-11-26T00:30:43.991561 -
2025-11-26T00:30:44.053686 - Using scaled fp8: fp8 matrix mult: False, scale input: False
2025-11-26T00:30:44.192744 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-11-26T00:30:45.817056 - gguf qtypes: F32 (1087), BF16 (6), Q6_K (260), Q5_K (580)2025-11-26T00:30:45.817056 -
2025-11-26T00:30:45.859606 - model weight dtype torch.bfloat16, manual cast: None
2025-11-26T00:30:45.859606 - model_type FLUX
2025-11-26T00:30:46.365392 - Requested to load WanVAE
2025-11-26T00:30:46.366393 - 0 models unloaded.
2025-11-26T00:30:46.411130 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-26T00:30:47.643458 - FETCH ComfyRegistry Data: 90/1082025-11-26T00:30:47.643458 -
2025-11-26T00:30:47.973567 - 0 models unloaded.
2025-11-26T00:30:47.974569 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-26T00:30:48.356318 - 0 models unloaded.
2025-11-26T00:30:48.358318 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-26T00:30:48.671949 - Requested to load QwenImageTEModel_
2025-11-26T00:30:50.162526 - loaded completely; 9455.80 MB usable, 7909.74 MB loaded, full load: True
2025-11-26T00:30:51.258031 - FETCH ComfyRegistry Data: 95/1082025-11-26T00:30:51.258031 -
2025-11-26T00:30:52.757967 - 0 models unloaded.
2025-11-26T00:30:52.760469 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-26T00:30:53.089384 - 0 models unloaded.
2025-11-26T00:30:53.090888 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-26T00:30:53.416907 - Requested to load QwenImageTEModel_
2025-11-26T00:30:54.855228 - FETCH ComfyRegistry Data: 100/1082025-11-26T00:30:54.855228 -
2025-11-26T00:30:54.913868 - loaded completely; 9437.68 MB usable, 7909.74 MB loaded, full load: True
2025-11-26T00:30:55.748715 - Requested to load QwenImage
2025-11-26T00:30:59.621010 - FETCH ComfyRegistry Data: 105/1082025-11-26T00:30:59.622513 -
2025-11-26T00:30:59.977429 - got prompt
2025-11-26T00:31:00.139793 - loaded partially; 8031.41 MB usable, 8031.32 MB loaded, 6319.50 MB offloaded, lowvram patches: 0
2025-11-26T00:31:00.145301 - Attempting to release mmap (607)2025-11-26T00:31:00.145301 -
2025-11-26T00:31:02.261325 - FETCH ComfyRegistry Data [DONE]2025-11-26T00:31:02.261325 -
2025-11-26T00:31:02.365082 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
2025-11-26T00:31:02.514264 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json2025-11-26T00:31:02.648056 - [DONE]2025-11-26T00:31:02.648056 -
2025-11-26T00:31:02.730367 - [ComfyUI-Manager] All startup tasks have been completed.
2025-11-26T00:33:44.097962 -
100%|██████████| 8/8 [02:39<00:00, 19.88s/it]2025-11-26T00:33:44.097962 -
2025-11-26T00:33:44.102983 - Requested to load WanVAE
2025-11-26T00:33:47.123867 - 0 models unloaded.
2025-11-26T00:33:47.174020 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-26T00:33:48.355794 - Memory cleanup signal sent2025-11-26T00:33:48.355794 -
2025-11-26T00:33:49.556911 - Unload Model:2025-11-26T00:33:49.556911 -
2025-11-26T00:33:49.556911 - - Unloading all models...2025-11-26T00:33:49.556911 -
2025-11-26T00:33:49.741193 - - Clearing Cache...2025-11-26T00:33:49.741193 -
2025-11-26T00:33:50.058614 - Starting RAM cleanup - current usage: 78.2%, available: 14261.7MB2025-11-26T00:33:50.058614 -
2025-11-26T00:33:54.215828 - Memory usage after cleanup: 68.7%, available: 20474.5MB2025-11-26T00:33:54.216830 -
2025-11-26T00:33:55.351797 - Memory usage after cleanup: 68.7%, available: 20484.1MB2025-11-26T00:33:55.351797 -
2025-11-26T00:33:56.470795 - Memory usage after cleanup: 68.6%, available: 20496.9MB2025-11-26T00:33:56.470795 -
2025-11-26T00:33:57.568055 - Memory usage after cleanup: 68.0%, available: 20932.4MB2025-11-26T00:33:57.569059 -
2025-11-26T00:33:58.656774 - Memory usage after cleanup: 66.9%, available: 21615.0MB2025-11-26T00:33:58.657777 -
2025-11-26T00:33:59.748084 - Memory usage after cleanup: 66.1%, available: 22140.5MB2025-11-26T00:33:59.749089 -
2025-11-26T00:33:59.749089 - Cleanup complete - final memory usage: 66.1%, available: 22140.5MB2025-11-26T00:33:59.749089 -
2025-11-26T00:33:59.756103 - [Delay Node] Starting delay of 15.0 seconds2025-11-26T00:33:59.756103 -
2025-11-26T00:34:14.757143 - [Delay Node] Delay of 15.0 seconds completed2025-11-26T00:34:14.757143 -
2025-11-26T00:34:14.765732 - Prompt executed in 210.97 seconds
2025-11-26T00:34:18.674591 - got prompt
2025-11-26T00:34:18.796484 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-26T00:34:18.799991 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-26T00:34:18.845612 - Using xformers attention in VAE
2025-11-26T00:34:18.845612 - Using xformers attention in VAE
2025-11-26T00:34:19.222834 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-11-26T00:34:19.522306 - Using scaled fp8: fp8 matrix mult: False, scale input: False
2025-11-26T00:34:19.666095 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-11-26T00:34:28.416318 - gguf qtypes: F32 (1087), BF16 (6), Q6_K (260), Q5_K (580)2025-11-26T00:34:28.416318 -
2025-11-26T00:34:28.459898 - model weight dtype torch.bfloat16, manual cast: None
2025-11-26T00:34:28.460898 - model_type FLUX
2025-11-26T00:34:43.289554 - Requested to load WanVAE
2025-11-26T00:34:43.292057 - 0 models unloaded.
2025-11-26T00:34:43.359239 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-26T00:34:44.065857 - 0 models unloaded.
2025-11-26T00:34:44.069866 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-26T00:34:44.409597 - 0 models unloaded.
2025-11-26T00:34:44.413600 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-26T00:34:44.746679 - Requested to load QwenImageTEModel_
2025-11-26T00:34:47.304768 - loaded completely; 9435.68 MB usable, 7909.74 MB loaded, full load: True
2025-11-26T00:34:50.038065 - 0 models unloaded.
2025-11-26T00:34:50.039065 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-26T00:34:50.394751 - 0 models unloaded.
2025-11-26T00:34:50.399255 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-26T00:34:50.695994 - Requested to load QwenImageTEModel_
2025-11-26T00:34:53.009032 - loaded completely; 9435.68 MB usable, 7909.74 MB loaded, full load: True
2025-11-26T00:34:53.890329 - Requested to load QwenImage
2025-11-26T00:35:03.884976 - loaded partially; 8029.41 MB usable, 8029.41 MB loaded, 6321.41 MB offloaded, lowvram patches: 0
2025-11-26T00:35:03.889975 - Attempting to release mmap (714)2025-11-26T00:35:03.889975 -
2025-11-26T00:37:13.489001 -
100%|██████████| 8/8 [01:53<00:00, 14.22s/it]2025-11-26T00:37:13.489001 -
2025-11-26T00:37:13.491002 - Requested to load WanVAE
2025-11-26T00:37:13.495508 - Tried to unpin tensor not pinned by ComfyUI
2025-11-26T00:37:13.495508 - Tried to unpin tensor not pinned by ComfyUI
(the "Tried to unpin tensor not pinned by ComfyUI" line repeats ~90 more times within the same few milliseconds)
2025-11-26T00:37:16.063796 - 0 models unloaded.
2025-11-26T00:37:16.113682 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-26T00:37:17.095460 - Memory cleanup signal sent2025-11-26T00:37:17.095460 -
2025-11-26T00:37:18.331085 - Unload Model:2025-11-26T00:37:18.331085 -
2025-11-26T00:37:18.331085 - - Unloading all models...2025-11-26T00:37:18.331085 -
2025-11-26T00:37:18.424512 - - Clearing Cache...2025-11-26T00:37:18.424512 -
2025-11-26T00:37:18.735196 - Starting RAM cleanup - current usage: 73.0%, available: 17628.8MB2025-11-26T00:37:18.735196 -
2025-11-26T00:37:22.214218 - Memory usage after cleanup: 67.8%, available: 21081.3MB2025-11-26T00:37:22.214218 -
2025-11-26T00:37:23.339721 - Memory usage after cleanup: 67.3%, available: 21366.7MB2025-11-26T00:37:23.339721 -
2025-11-26T00:37:24.441971 - Memory usage after cleanup: 66.5%, available: 21903.7MB2025-11-26T00:37:24.441971 -
2025-11-26T00:37:25.530606 - Memory usage after cleanup: 65.4%, available: 22615.9MB2025-11-26T00:37:25.530606 -
2025-11-26T00:37:26.612161 - Memory usage after cleanup: 64.8%, available: 23013.2MB2025-11-26T00:37:26.612161 -
2025-11-26T00:37:27.731762 - Memory usage after cleanup: 64.8%, available: 22982.3MB2025-11-26T00:37:27.731762 -
2025-11-26T00:37:27.732829 - Cleanup complete - final memory usage: 64.8%, available: 22982.3MB2025-11-26T00:37:27.732829 -
2025-11-26T00:37:27.740851 - [Delay Node] Starting delay of 15.0 seconds2025-11-26T00:37:27.740851 -
2025-11-26T00:37:41.567007 - got prompt
2025-11-26T00:37:42.742319 - [Delay Node] Delay of 15.0 seconds completed2025-11-26T00:37:42.742319 -
2025-11-26T00:37:42.746393 - Prompt executed in 204.01 seconds
2025-11-26T00:37:44.624648 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-26T00:37:44.627677 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2025-11-26T00:37:44.673019 - Using xformers attention in VAE
2025-11-26T00:37:44.673019 - Using xformers attention in VAE
2025-11-26T00:37:45.048612 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-11-26T00:37:45.213470 - Using scaled fp8: fp8 matrix mult: False, scale input: False
2025-11-26T00:37:45.373248 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-11-26T00:37:51.941921 - gguf qtypes: F32 (1087), BF16 (6), Q6_K (260), Q5_K (580)2025-11-26T00:37:51.942921 -
2025-11-26T00:37:51.987489 - model weight dtype torch.bfloat16, manual cast: None
2025-11-26T00:37:51.988488 - model_type FLUX
2025-11-26T00:37:52.616713 - Requested to load WanVAE
2025-11-26T00:37:52.618227 - 0 models unloaded.
2025-11-26T00:37:52.660427 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-26T00:37:53.396039 - 0 models unloaded.
2025-11-26T00:37:53.398054 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-26T00:37:53.738463 - 0 models unloaded.
2025-11-26T00:37:53.739463 - loaded partially; 128.00 MB usable, 128.00 MB loaded, 114.00 MB offloaded, lowvram patches: 0
2025-11-26T00:37:54.042903 - Requested to load QwenImageTEModel_
2025-11-26T00:37:57.279142 - !!! Exception during processing !!! CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-11-26T00:37:57.288690 - Traceback (most recent call last):
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\custom_nodes\ComfyUI-Lora-Manager\py\metadata_collector\metadata_hook.py", line 165, in async_map_node_over_list_with_metadata
results = await original_map_node_over_list(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 298, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 286, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
return method(locked_class, **inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_api\latest\_io.py", line 1275, in EXECUTE_NORMALIZED
to_return = cls.execute(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy_extras\nodes_qwen.py", line 101, in execute
conditioning = clip.encode_from_tokens_scheduled(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 177, in encode_from_tokens_scheduled
pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 238, in encode_from_tokens
self.load_model()
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\sd.py", line 271, in load_model
model_management.load_model_gpu(self.patcher)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 706, in load_model_gpu
return load_models_gpu([model])
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 701, in load_models_gpu
loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 506, in model_load
self.model_use_more_vram(use_more_vram, force_patch_weights=force_patch_weights)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_management.py", line 536, in model_use_more_vram
return self.model.partially_load(self.device, extra_memory, force_patch_weights=force_patch_weights)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 944, in partially_load
raise e
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 941, in partially_load
self.load(device_to, lowvram_model_memory=current_used + extra_memory, force_patch_weights=force_patch_weights, full_load=full_load)
File "C:\Users\jerem\AppData\Local\Programs\ComfyUI\resources\ComfyUI\comfy\model_patcher.py", line 760, in load
x[2].to(device_to)
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1369, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 955, in _apply
param_applied = fn(param)
^^^^^^^^^
File "C:\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1355, in convert
return t.to(
^^^^^
torch.AcceleratorError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-11-26T00:37:57.290690 - Prompt executed in 12.73 seconds
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
Workflow too large. Please manually upload the workflow from local file system.
## Additional Context
(Please add any additional context or steps to reproduce the error here)
I updated all my custom nodes, including gguf (comfyui-gguf:1.1.6, or gguf:2.6.8).
Which one is in play for the below error? comfyui-gguf 1.1.6 has a critical fix but gguf-2.6.8 may be missing the same fix.
I installed today's ComfyUI update (0.3.72) and the problem is still present. I removed the --low-vram launch argument. ASAP, I'll try the solution of @henrikvilhelmberglund.
I'd like not to have to reinstall Comfy once again, so I'm keeping that idea as a last resort.
I don't think a clean reinstall will help you.
----- Here is my last log
...
@rattus128 It's "GGUF Loader" from gguf.
Here is the JSON version of the node in the workflow, if it helps to understand:
{
"id": 115,
"type": "LoaderGGUF",
"pos": [
-260,
-80
],
"size": [
340,
60
],
"flags": {},
"order": 9,
"mode": 0,
"inputs": [
{
"localized_name": "gguf_name",
"name": "gguf_name",
"type": "COMBO",
"widget": {
"name": "gguf_name"
},
"link": null
}
],
"outputs": [
{
"localized_name": "MODEL",
"name": "MODEL",
"type": "MODEL",
"links": [
226,
242
]
}
],
"properties": {
"cnr_id": "gguf",
"ver": "bea4bb725c3d73843aa8fa3f195dc59e220112c9",
"Node name for S&R": "LoaderGGUF",
"ue_properties": {
"widget_ue_connectable": {
"gguf_name": true
},
"version": "7.1",
"input_ue_unconnectable": {}
}
},
"widgets_values": [
"Qwen\\style\\Qwen-Image-Edit-2509-Q5_K_M.gguf"
]
}
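For anyone unsure which pack their loader comes from, the "cnr_id" and "ver" properties in the dump above are what identify the providing node pack. Here is a small, hypothetical helper for scanning an exported workflow for GGUF loader nodes (the workflow.json path is an assumption):

```python
import json

# Hypothetical helper: report which custom-node pack (cnr_id) provides each
# GGUF-related node in an exported workflow, as seen in the node dump above.
with open("workflow.json") as f:   # path is an assumption
    workflow = json.load(f)

for node in workflow.get("nodes", []):
    if "GGUF" in node.get("type", ""):
        props = node.get("properties", {})
        print(node.get("id"), node.get("type"),
              "pack:", props.get("cnr_id"), "ver:", props.get("ver"))
```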
Can you try again with comfyui-gguf, which provides the "UnetLoaderGGUF" node, instead? LMK the results.
WOW. Using "Unet Loader (GGUF)" seems to make things better. I generated 5-6 images with Qwen, changing model quantization and LoRAs, and no error was thrown. Good try.
--disable-pinned-memory can fix the issue. What's the pinned-memory function?
Pinned memory is a major performance boost to partial model load flows but currently we don't support pinning with GGUF due to some implementation challenges.
You may actually find that due to recent changes, things run faster for you without GGUF models, even when you have less VRAM than the model size. If you have a FP8 version of your model give it a go and see what the performance is like.
GGUF still has a clear advantage when it comes to RAM usage but if you are only worried about VRAM usage try dropping GGUF. When doing high resolution on some of the larger models you may actually run faster even with the bigger model file.
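As a rough illustration of what pinning does, here is a minimal PyTorch sketch; this is illustrative only, not ComfyUI's actual model-management code:

```python
import torch

# Pinned (page-locked) host memory is what makes truly asynchronous
# host-to-device copies possible; that is the performance win referred to
# above for partial model load flows.
if torch.cuda.is_available():
    pageable = torch.randn(1024, 1024)   # ordinary pageable host tensor
    pinned = pageable.pin_memory()       # page-locked copy of the same data
    print(pinned.is_pinned())            # True

    # non_blocking=True only overlaps with other work when the source is pinned.
    on_gpu = pinned.to("cuda", non_blocking=True)
    torch.cuda.synchronize()             # wait for the async copy to complete
```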
I thought the problem was gone after changing the GGUF loader. My bad, there is still a problem. After being stuck in the KSampler with WAN 2.2, I am now stuck in the KSampler with Qwen, but no error is thrown. I'll look in the open issues or create a new one.
(By Gemini ^^)
Hi everyone,
I was facing the exact same issue ("CUDA error: invalid argument" during VAEDecode) after the ComfyUI update.
My setup is an RTX 5090.
I confirmed the problem stemmed from the installation of PyTorch 2.8.0+cu129, which is currently unstable on recent architectures.
✅ Solution that worked: Force installation of a stable Nightly build (CUDA 12.8)
I replaced the unstable version with a Nightly build compiled for CUDA 12.8, which resolved the error while maintaining 5090 support.
1. Uninstall the current version (to be executed inside the python_embeded folder):
.\python.exe -m pip uninstall torch torchvision torchaudio -y
2. Install the Nightly build (cu128):
.\python.exe -m pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
After performing these steps, the error disappeared. Hope this helps users with 4000/5000 series cards!
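If you try this, a quick way to confirm which build is actually active afterwards is the hedged sketch below; the printed values are examples and will differ per install:

```python
import torch

# Sanity-check the swapped PyTorch build and that CUDA is usable.
print(torch.__version__)            # e.g. "2.8.0+cu129" before the swap
print(torch.version.cuda)           # CUDA toolkit the wheel was built against
print(torch.cuda.is_available())    # should be True with a working driver
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```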
I had a similar issue: it would work fine at first but fail after that. It was fixed for me by adding --disable-pinned-memory to the launch arguments. Pinned memory doesn't seem to be working 100% correctly.
THANK YOU! I ran this cmd from the ComfyUI folder:
.\python_embeded\python.exe -I ComfyUI\main.py --windows-standalone-build --use-sage-attention --disable-pinned-memory
It ran with no issues, no black output after the 1st gen. Also, does anybody know a permanent fix besides editing the .bat file?
My environment: RTX 5090, PyTorch 2.8.0+cu129.
I also often encounter the error "Tried to unpin tensor not pinned by ComfyUI". It still happens even after restarting. When I click "Run", it takes about 10 seconds for ComfyUI to respond.
I run the ComfyUI Windows desktop app.
Well, I did not go that far, but it works now. I just changed the GGUF loader in all the workflows I am using. Qwen and WAN are now working well. I saw in another issue that cuDNN would be useful, so I installed it too, to see if I get better performance.
May I ask which GGUF loader is being used?
I want to know which GGUF loader is being used too.
@huanghetv & @armynew I now use "Unet Loader (GGUF)" from "comfyui-gguf" (you can find it among the custom nodes via the Manager). I saw an update yesterday for the "gguf" custom node, but I don't know if it fixes the problem.
Wanted to chime in and say I too started getting this error with the latest Comfy. It first manifested after I started using Wan 2.2 (after testing the new Z-Image, hence the update), and I noticed the low-noise pass was outputting a black image. Then, when trying to re-run the workflow without restarting Comfy, I would get the invalid argument error; I also noticed the error would occur at different points in the workflow at times. The only fix to avoid the error was restarting Comfy, but that did not fix the Wan low-noise black output (the high-noise pass output was fine as usual on the first run). Updated all custom nodes etc., no fix.
The only fix that worked was the above user's suggestion to apply --disable-pinned-memory in the startup .bat.
I should also mention the newest Comfy update broke the MultiGPU custom node I was using to load the GGUF and use a DisTorch feature. I had to manually patch its distorch2.py with a user fix that supported an apparent change in argument counts on Comfy's end. I was getting different errors before I made that change (with all kinds of models, not just Wan 2.2). Once that was fixed was when I noticed the pinned-memory issues with Wan on the model change from high to low passes.
Can we get a log and workflow for the black output into invalid arg case? What is your hardware?
RTX 3070