mat1 and mat2 shapes cannot be multiplied (7920x1280 and 3840x1280) on Qwen Image Edit
Custom Node Testing
- [x] I have tried disabling custom nodes and the issue persists.
Expected Behavior
The TextEncodeQwenImageEdit node should encode the prompt and reference image without error.
Actual Behavior
Execution fails on the TextEncodeQwenImageEdit node with `RuntimeError: mat1 and mat2 shapes cannot be multiplied (7920x1280 and 3840x1280)`.
Steps to Reproduce
png.
Debug Logs
# ComfyUI Error Report
## Error Details
- **Node ID:** 1
- **Node Type:** TextEncodeQwenImageEdit
- **Exception Type:** RuntimeError
- **Exception Message:** mat1 and mat2 shapes cannot be multiplied (7920x1280 and 3840x1280)
## Stack Trace
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\execution.py", line 496, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\execution.py", line 315, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\execution.py", line 289, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\execution.py", line 277, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_qwen.py", line 55, in encode
conditioning = clip.encode_from_tokens_scheduled(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 170, in encode_from_tokens_scheduled
pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 232, in encode_from_tokens
o = self.cond_stage_model.encode_token_weights(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\qwen_image.py", line 51, in encode_token_weights
out, pooled, extra = super().encode_token_weights(token_weight_pairs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 686, in encode_token_weights
out = getattr(self, self.clip).encode_token_weights(token_weight_pairs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 45, in encode_token_weights
o = self.encode(to_encode)
^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 291, in encode
return self(tokens)
^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 253, in forward
embeds, attention_mask, num_tokens, embeds_info = self.process_tokens(tokens, device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 219, in process_tokens
emb, extra = self.transformer.preprocess_embed(emb, device=device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\llama.py", line 384, in preprocess_embed
return self.visual(image.to(device, dtype=torch.float32), grid), grid
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\qwen_vl.py", line 425, in forward
hidden_states = block(hidden_states, position_embeddings, cu_seqlens_now, optimized_attention=optimized_attention)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\qwen_vl.py", line 252, in forward
hidden_states = self.attn(hidden_states, position_embeddings, cu_seqlens, optimized_attention)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\qwen_vl.py", line 195, in forward
qkv = self.qkv(hidden_states)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 110, in forward
return self.forward_comfy_cast_weights(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\ops.py", line 217, in forward_comfy_cast_weights
out = super().forward_comfy_cast_weights(input, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 106, in forward_comfy_cast_weights
return torch.nn.functional.linear(input, weight, bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
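For context, the trace ends in `torch.nn.functional.linear` inside the vision tower's `qkv` projection. Below is a minimal sketch of the mismatch using the shapes from the error message; the weight layout is a hypothetical illustration chosen only to reproduce the message, not the actual loaded GGUF tensor:

```python
import torch
import torch.nn.functional as F

# Shapes taken from the error message (illustrative only).
hidden = torch.randn(7920, 1280)   # vision patch embeddings (tokens x features)
bad_qkv = torch.randn(1280, 3840)  # weight whose transpose is 3840x1280

# F.linear(input, weight) computes input @ weight.T, so the input's last
# dimension (1280) must match weight.T's first dimension (3840 here).
try:
    F.linear(hidden, bad_qkv)
    err = None
except RuntimeError as e:
    err = e
print(err)  # mat1 and mat2 shapes cannot be multiplied (7920x1280 and 3840x1280)

# A qkv weight consistent with a 1280-wide vision tower has shape (3 * 1280, 1280):
good_qkv = torch.randn(3840, 1280)
out = F.linear(hidden, good_qkv)   # out.shape == (7920, 3840)
```

Since 3840 = 3 × 1280 (the fused q/k/v output width), the two dimensions appear swapped relative to what the projection expects, which may point at a vision-tower (mmproj) weight whose layout does not match what this text encoder build expects.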
## System Information
- **ComfyUI Version:** 0.3.50
- **Arguments:** ComfyUI\main.py --windows-standalone-build
- **OS:** nt
- **Python Version:** 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:17:27) [MSC v.1929 64 bit (AMD64)]
- **Embedded Python:** false
- **PyTorch Version:** 2.7.1+cu128
## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 4060 : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 8585216000
- **VRAM Free:** 7441743872
- **Torch VRAM Total:** 0
- **Torch VRAM Free:** 0
## Logs
2025-08-19T10:56:42.715913 - [START] Security scan
2025-08-19T10:56:43.906038 - [DONE] Security scan
2025-08-19T10:56:44.148931 - ## ComfyUI-Manager: installing dependencies done.
2025-08-19T10:56:44.148931 - ** ComfyUI startup time: 2025-08-19 10:56:44.148
2025-08-19T10:56:44.148931 - ** Platform: Windows
2025-08-19T10:56:44.148931 - ** Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:17:27) [MSC v.1929 64 bit (AMD64)]
2025-08-19T10:56:44.149934 - ** Python executable: C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Scripts\python.exe
2025-08-19T10:56:44.149934 - ** ComfyUI Path: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI
2025-08-19T10:56:44.149934 - ** ComfyUI Base Folder Path: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI
2025-08-19T10:56:44.150931 - ** User directory: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\user
2025-08-19T10:56:44.162045 - ** ComfyUI-Manager config path: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\user\default\ComfyUI-Manager\config.ini
2025-08-19T10:56:44.162045 - ** Log path: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\user\comfyui.log
2025-08-19T10:56:45.613464 -
Prestartup times for custom nodes:
2025-08-19T10:56:45.613464 - 0.0 seconds: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
2025-08-19T10:56:45.615465 - 3.5 seconds: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
2025-08-19T10:56:45.616471 -
2025-08-19T10:56:48.765297 - Checkpoint files will always be loaded safely.
2025-08-19T10:56:49.169225 - Total VRAM 8188 MB, total RAM 16131 MB
2025-08-19T10:56:49.169225 - pytorch version: 2.7.1+cu128
2025-08-19T10:56:49.170222 - Set vram state to: NORMAL_VRAM
2025-08-19T10:56:49.170222 - Device: cuda:0 NVIDIA GeForce RTX 4060 : cudaMallocAsync
2025-08-19T10:56:52.553850 - Using pytorch attention
2025-08-19T10:56:57.423422 - Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:17:27) [MSC v.1929 64 bit (AMD64)]
2025-08-19T10:56:57.423929 - ComfyUI version: 0.3.50
2025-08-19T10:56:57.496039 - ComfyUI frontend version: 1.25.9
2025-08-19T10:56:57.498554 - [Prompt Server] web root: C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\comfyui_frontend_package\static
2025-08-19T10:57:04.098182 - ComfyUI-GGUF: Partial torch compile only, consider updating pytorch
2025-08-19T10:57:04.209529 - ### Loading: ComfyUI-Manager (V3.36)
2025-08-19T10:57:04.210543 - [ComfyUI-Manager] network_mode: public
2025-08-19T10:57:04.369705 - ### ComfyUI Version: v0.3.50-27-g4977f203 | Released on '2025-08-18'
2025-08-19T10:57:04.979610 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-08-19T10:57:05.008333 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-08-19T10:57:05.201950 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-08-19T10:57:05.339706 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-08-19T10:57:05.678628 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-08-19T10:57:07.824593 - FantasyPortrait nodes not available due to error in importing them: No module named 'onnx'
2025-08-19T10:57:07.848335 - [D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux] | INFO -> Using ckpts path: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
2025-08-19T10:57:07.848335 - [D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux] | INFO -> Using symlinks: False
2025-08-19T10:57:07.850459 - [D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
2025-08-19T10:57:08.351335 - DWPose: Onnxruntime with acceleration providers detected
2025-08-19T10:57:08.537023 - [rgthree-comfy] Loaded 48 magnificent nodes. 🎉
2025-08-19T10:57:08.542533 - ======================================== Stand-In ========================================
2025-08-19T10:57:08.706142 - Successfully loaded all Stand-In nodes.
2025-08-19T10:57:08.706142 - ==========================================================================================
2025-08-19T10:57:08.708174 -
Import times for custom nodes:
2025-08-19T10:57:08.708174 - 0.0 seconds: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
2025-08-19T10:57:08.708174 - 0.0 seconds: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_SigmoidOffsetScheduler
2025-08-19T10:57:08.709212 - 0.0 seconds: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MagCache
2025-08-19T10:57:08.709212 - 0.0 seconds: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_essentials
2025-08-19T10:57:08.709212 - 0.0 seconds: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF
2025-08-19T10:57:08.709212 - 0.1 seconds: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-KJNodes
2025-08-19T10:57:08.709212 - 0.1 seconds: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
2025-08-19T10:57:08.709212 - 0.2 seconds: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
2025-08-19T10:57:08.709212 - 0.2 seconds: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\Stand-In_Preprocessor_ComfyUI
2025-08-19T10:57:08.710134 - 0.6 seconds: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux
2025-08-19T10:57:08.710134 - 0.7 seconds: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
2025-08-19T10:57:08.710638 - 2.8 seconds: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper
2025-08-19T10:57:08.710638 - 4.2 seconds: D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-FramePackWrapper
2025-08-19T10:57:08.710638 -
2025-08-19T10:57:09.292709 - Context impl SQLiteImpl.
2025-08-19T10:57:09.292709 - Will assume non-transactional DDL.
2025-08-19T10:57:09.293910 - No target revision found.
2025-08-19T10:57:09.341801 - Starting server
2025-08-19T10:57:09.343071 - To see the GUI go to: http://127.0.0.1:8188
2025-08-19T10:57:10.690024 - FETCH ComfyRegistry Data: 5/94
2025-08-19T10:57:14.545683 - FETCH ComfyRegistry Data: 10/94
2025-08-19T10:57:18.461604 - FETCH ComfyRegistry Data: 15/94
2025-08-19T10:57:22.450778 - FETCH ComfyRegistry Data: 20/94
2025-08-19T10:57:26.373000 - FETCH ComfyRegistry Data: 25/94
2025-08-19T10:57:30.356361 - FETCH ComfyRegistry Data: 30/94
2025-08-19T10:57:34.508382 - FETCH ComfyRegistry Data: 35/94
2025-08-19T10:57:38.376705 - FETCH ComfyRegistry Data: 40/94
2025-08-19T10:57:42.120644 - FETCH ComfyRegistry Data: 45/94
2025-08-19T10:57:46.038956 - FETCH ComfyRegistry Data: 50/94
2025-08-19T10:57:49.962341 - FETCH ComfyRegistry Data: 55/94
2025-08-19T10:57:53.952915 - FETCH ComfyRegistry Data: 60/94
2025-08-19T10:57:58.009264 - FETCH ComfyRegistry Data: 65/94
2025-08-19T10:58:03.012717 - FETCH ComfyRegistry Data: 70/94
2025-08-19T10:58:09.255874 - FETCH ComfyRegistry Data: 75/94
2025-08-19T10:58:13.625591 - FETCH ComfyRegistry Data: 80/94
2025-08-19T10:58:17.588208 - FETCH ComfyRegistry Data: 85/94
2025-08-19T10:58:21.547281 - FETCH ComfyRegistry Data: 90/94
2025-08-19T10:58:25.327570 - FETCH ComfyRegistry Data [DONE]
2025-08-19T10:58:25.586569 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
2025-08-19T10:58:25.916283 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [DONE]
2025-08-19T10:58:26.198137 - [ComfyUI-Manager] All startup tasks have been completed.
2025-08-19T11:01:44.360743 - got prompt
2025-08-19T11:01:45.354166 - Using pytorch attention in VAE
2025-08-19T11:01:45.358166 - Using pytorch attention in VAE
2025-08-19T11:01:54.521937 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-08-19T11:02:12.044145 - gguf qtypes: Q6_K (29), F32 (141), Q4_K (169)
2025-08-19T11:02:12.239597 - Dequantizing token_embd.weight to prevent runtime OOM.
2025-08-19T11:02:23.560627 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-08-19T11:02:25.097062 - clip missing: ['visual.patch_embed.proj.weight', 'visual.blocks.0.norm1.weight', 'visual.blocks.0.norm2.weight', 'visual.blocks.0.attn.qkv.weight', 'visual.blocks.0.attn.proj.weight', 'visual.blocks.0.mlp.gate_proj.weight', 'visual.blocks.0.mlp.up_proj.weight', 'visual.blocks.0.mlp.down_proj.weight', 'visual.blocks.1.norm1.weight', 'visual.blocks.1.norm2.weight', 'visual.blocks.1.attn.qkv.weight', 'visual.blocks.1.attn.proj.weight', 'visual.blocks.1.mlp.gate_proj.weight', 'visual.blocks.1.mlp.up_proj.weight', 'visual.blocks.1.mlp.down_proj.weight', 'visual.blocks.2.norm1.weight', 'visual.blocks.2.norm2.weight', 'visual.blocks.2.attn.qkv.weight', 'visual.blocks.2.attn.proj.weight', 'visual.blocks.2.mlp.gate_proj.weight', 'visual.blocks.2.mlp.up_proj.weight', 'visual.blocks.2.mlp.down_proj.weight', 'visual.blocks.3.norm1.weight', 'visual.blocks.3.norm2.weight', 'visual.blocks.3.attn.qkv.weight', 'visual.blocks.3.attn.proj.weight', 'visual.blocks.3.mlp.gate_proj.weight', 'visual.blocks.3.mlp.up_proj.weight', 'visual.blocks.3.mlp.down_proj.weight', 'visual.blocks.4.norm1.weight', 'visual.blocks.4.norm2.weight', 'visual.blocks.4.attn.qkv.weight', 'visual.blocks.4.attn.proj.weight', 'visual.blocks.4.mlp.gate_proj.weight', 'visual.blocks.4.mlp.up_proj.weight', 'visual.blocks.4.mlp.down_proj.weight', 'visual.blocks.5.norm1.weight', 'visual.blocks.5.norm2.weight', 'visual.blocks.5.attn.qkv.weight', 'visual.blocks.5.attn.proj.weight', 'visual.blocks.5.mlp.gate_proj.weight', 'visual.blocks.5.mlp.up_proj.weight', 'visual.blocks.5.mlp.down_proj.weight', 'visual.blocks.6.norm1.weight', 'visual.blocks.6.norm2.weight', 'visual.blocks.6.attn.qkv.weight', 'visual.blocks.6.attn.proj.weight', 'visual.blocks.6.mlp.gate_proj.weight', 'visual.blocks.6.mlp.up_proj.weight', 'visual.blocks.6.mlp.down_proj.weight', 'visual.blocks.7.norm1.weight', 'visual.blocks.7.norm2.weight', 'visual.blocks.7.attn.qkv.weight', 'visual.blocks.7.attn.proj.weight', 
'visual.blocks.7.mlp.gate_proj.weight', 'visual.blocks.7.mlp.up_proj.weight', 'visual.blocks.7.mlp.down_proj.weight', 'visual.blocks.8.norm1.weight', 'visual.blocks.8.norm2.weight', 'visual.blocks.8.attn.qkv.weight', 'visual.blocks.8.attn.proj.weight', 'visual.blocks.8.mlp.gate_proj.weight', 'visual.blocks.8.mlp.up_proj.weight', 'visual.blocks.8.mlp.down_proj.weight', 'visual.blocks.9.norm1.weight', 'visual.blocks.9.norm2.weight', 'visual.blocks.9.attn.qkv.weight', 'visual.blocks.9.attn.proj.weight', 'visual.blocks.9.mlp.gate_proj.weight', 'visual.blocks.9.mlp.up_proj.weight', 'visual.blocks.9.mlp.down_proj.weight', 'visual.blocks.10.norm1.weight', 'visual.blocks.10.norm2.weight', 'visual.blocks.10.attn.qkv.weight', 'visual.blocks.10.attn.proj.weight', 'visual.blocks.10.mlp.gate_proj.weight', 'visual.blocks.10.mlp.up_proj.weight', 'visual.blocks.10.mlp.down_proj.weight', 'visual.blocks.11.norm1.weight', 'visual.blocks.11.norm2.weight', 'visual.blocks.11.attn.qkv.weight', 'visual.blocks.11.attn.proj.weight', 'visual.blocks.11.mlp.gate_proj.weight', 'visual.blocks.11.mlp.up_proj.weight', 'visual.blocks.11.mlp.down_proj.weight', 'visual.blocks.12.norm1.weight', 'visual.blocks.12.norm2.weight', 'visual.blocks.12.attn.qkv.weight', 'visual.blocks.12.attn.proj.weight', 'visual.blocks.12.mlp.gate_proj.weight', 'visual.blocks.12.mlp.up_proj.weight', 'visual.blocks.12.mlp.down_proj.weight', 'visual.blocks.13.norm1.weight', 'visual.blocks.13.norm2.weight', 'visual.blocks.13.attn.qkv.weight', 'visual.blocks.13.attn.proj.weight', 'visual.blocks.13.mlp.gate_proj.weight', 'visual.blocks.13.mlp.up_proj.weight', 'visual.blocks.13.mlp.down_proj.weight', 'visual.blocks.14.norm1.weight', 'visual.blocks.14.norm2.weight', 'visual.blocks.14.attn.qkv.weight', 'visual.blocks.14.attn.proj.weight', 'visual.blocks.14.mlp.gate_proj.weight', 'visual.blocks.14.mlp.up_proj.weight', 'visual.blocks.14.mlp.down_proj.weight', 'visual.blocks.15.norm1.weight', 'visual.blocks.15.norm2.weight', 
'visual.blocks.15.attn.qkv.weight', 'visual.blocks.15.attn.proj.weight', 'visual.blocks.15.mlp.gate_proj.weight', 'visual.blocks.15.mlp.up_proj.weight', 'visual.blocks.15.mlp.down_proj.weight', 'visual.blocks.16.norm1.weight', 'visual.blocks.16.norm2.weight', 'visual.blocks.16.attn.qkv.weight', 'visual.blocks.16.attn.proj.weight', 'visual.blocks.16.mlp.gate_proj.weight', 'visual.blocks.16.mlp.up_proj.weight', 'visual.blocks.16.mlp.down_proj.weight', 'visual.blocks.17.norm1.weight', 'visual.blocks.17.norm2.weight', 'visual.blocks.17.attn.qkv.weight', 'visual.blocks.17.attn.proj.weight', 'visual.blocks.17.mlp.gate_proj.weight', 'visual.blocks.17.mlp.up_proj.weight', 'visual.blocks.17.mlp.down_proj.weight', 'visual.blocks.18.norm1.weight', 'visual.blocks.18.norm2.weight', 'visual.blocks.18.attn.qkv.weight', 'visual.blocks.18.attn.proj.weight', 'visual.blocks.18.mlp.gate_proj.weight', 'visual.blocks.18.mlp.up_proj.weight', 'visual.blocks.18.mlp.down_proj.weight', 'visual.blocks.19.norm1.weight', 'visual.blocks.19.norm2.weight', 'visual.blocks.19.attn.qkv.weight', 'visual.blocks.19.attn.proj.weight', 'visual.blocks.19.mlp.gate_proj.weight', 'visual.blocks.19.mlp.up_proj.weight', 'visual.blocks.19.mlp.down_proj.weight', 'visual.blocks.20.norm1.weight', 'visual.blocks.20.norm2.weight', 'visual.blocks.20.attn.qkv.weight', 'visual.blocks.20.attn.proj.weight', 'visual.blocks.20.mlp.gate_proj.weight', 'visual.blocks.20.mlp.up_proj.weight', 'visual.blocks.20.mlp.down_proj.weight', 'visual.blocks.21.norm1.weight', 'visual.blocks.21.norm2.weight', 'visual.blocks.21.attn.qkv.weight', 'visual.blocks.21.attn.proj.weight', 'visual.blocks.21.mlp.gate_proj.weight', 'visual.blocks.21.mlp.up_proj.weight', 'visual.blocks.21.mlp.down_proj.weight', 'visual.blocks.22.norm1.weight', 'visual.blocks.22.norm2.weight', 'visual.blocks.22.attn.qkv.weight', 'visual.blocks.22.attn.proj.weight', 'visual.blocks.22.mlp.gate_proj.weight', 'visual.blocks.22.mlp.up_proj.weight', 
'visual.blocks.22.mlp.down_proj.weight', 'visual.blocks.23.norm1.weight', 'visual.blocks.23.norm2.weight', 'visual.blocks.23.attn.qkv.weight', 'visual.blocks.23.attn.proj.weight', 'visual.blocks.23.mlp.gate_proj.weight', 'visual.blocks.23.mlp.up_proj.weight', 'visual.blocks.23.mlp.down_proj.weight', 'visual.blocks.24.norm1.weight', 'visual.blocks.24.norm2.weight', 'visual.blocks.24.attn.qkv.weight', 'visual.blocks.24.attn.proj.weight', 'visual.blocks.24.mlp.gate_proj.weight', 'visual.blocks.24.mlp.up_proj.weight', 'visual.blocks.24.mlp.down_proj.weight', 'visual.blocks.25.norm1.weight', 'visual.blocks.25.norm2.weight', 'visual.blocks.25.attn.qkv.weight', 'visual.blocks.25.attn.proj.weight', 'visual.blocks.25.mlp.gate_proj.weight', 'visual.blocks.25.mlp.up_proj.weight', 'visual.blocks.25.mlp.down_proj.weight', 'visual.blocks.26.norm1.weight', 'visual.blocks.26.norm2.weight', 'visual.blocks.26.attn.qkv.weight', 'visual.blocks.26.attn.proj.weight', 'visual.blocks.26.mlp.gate_proj.weight', 'visual.blocks.26.mlp.up_proj.weight', 'visual.blocks.26.mlp.down_proj.weight', 'visual.blocks.27.norm1.weight', 'visual.blocks.27.norm2.weight', 'visual.blocks.27.attn.qkv.weight', 'visual.blocks.27.attn.proj.weight', 'visual.blocks.27.mlp.gate_proj.weight', 'visual.blocks.27.mlp.up_proj.weight', 'visual.blocks.27.mlp.down_proj.weight', 'visual.blocks.28.norm1.weight', 'visual.blocks.28.norm2.weight', 'visual.blocks.28.attn.qkv.weight', 'visual.blocks.28.attn.proj.weight', 'visual.blocks.28.mlp.gate_proj.weight', 'visual.blocks.28.mlp.up_proj.weight', 'visual.blocks.28.mlp.down_proj.weight', 'visual.blocks.29.norm1.weight', 'visual.blocks.29.norm2.weight', 'visual.blocks.29.attn.qkv.weight', 'visual.blocks.29.attn.proj.weight', 'visual.blocks.29.mlp.gate_proj.weight', 'visual.blocks.29.mlp.up_proj.weight', 'visual.blocks.29.mlp.down_proj.weight', 'visual.blocks.30.norm1.weight', 'visual.blocks.30.norm2.weight', 'visual.blocks.30.attn.qkv.weight', 'visual.blocks.30.attn.proj.weight', 
'visual.blocks.30.mlp.gate_proj.weight', 'visual.blocks.30.mlp.up_proj.weight', 'visual.blocks.30.mlp.down_proj.weight', 'visual.blocks.31.norm1.weight', 'visual.blocks.31.norm2.weight', 'visual.blocks.31.attn.qkv.weight', 'visual.blocks.31.attn.proj.weight', 'visual.blocks.31.mlp.gate_proj.weight', 'visual.blocks.31.mlp.up_proj.weight', 'visual.blocks.31.mlp.down_proj.weight', 'visual.merger.ln_q.weight', 'visual.merger.mlp.0.weight', 'visual.merger.mlp.2.weight']
2025-08-19T11:02:25.684499 - gguf qtypes: F32 (1087), BF16 (6), Q5_K (28), Q4_K (580), Q6_K (232)
2025-08-19T11:02:25.819373 - model weight dtype torch.bfloat16, manual cast: None
2025-08-19T11:02:25.823811 - model_type FLUX
2025-08-19T11:02:40.754737 - Requested to load WanVAE
2025-08-19T11:02:40.754737 - 0 models unloaded.
2025-08-19T11:02:40.931154 - loaded partially 128.0 127.9998779296875 0
2025-08-19T11:02:42.435748 - Requested to load QwenImageTEModel_
2025-08-19T11:04:48.363478 - loaded partially 5645.8 5645.798828125 0
2025-08-19T11:04:48.369500 - Attempting to release mmap (297)
2025-08-19T11:05:09.512972 - !!! Exception during processing !!! mat1 and mat2 shapes cannot be multiplied (5400x1280 and 3840x1280)
2025-08-19T11:05:10.036168 - Traceback (most recent call last):
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\execution.py", line 496, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\execution.py", line 315, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\execution.py", line 289, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\execution.py", line 277, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_qwen.py", line 55, in encode
conditioning = clip.encode_from_tokens_scheduled(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 170, in encode_from_tokens_scheduled
pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 232, in encode_from_tokens
o = self.cond_stage_model.encode_token_weights(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\qwen_image.py", line 51, in encode_token_weights
out, pooled, extra = super().encode_token_weights(token_weight_pairs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 686, in encode_token_weights
out = getattr(self, self.clip).encode_token_weights(token_weight_pairs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 45, in encode_token_weights
o = self.encode(to_encode)
^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 291, in encode
return self(tokens)
^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 253, in forward
embeds, attention_mask, num_tokens, embeds_info = self.process_tokens(tokens, device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 219, in process_tokens
emb, extra = self.transformer.preprocess_embed(emb, device=device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\llama.py", line 384, in preprocess_embed
return self.visual(image.to(device, dtype=torch.float32), grid), grid
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\qwen_vl.py", line 425, in forward
hidden_states = block(hidden_states, position_embeddings, cu_seqlens_now, optimized_attention=optimized_attention)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\qwen_vl.py", line 252, in forward
hidden_states = self.attn(hidden_states, position_embeddings, cu_seqlens, optimized_attention)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\qwen_vl.py", line 195, in forward
qkv = self.qkv(hidden_states)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 110, in forward
return self.forward_comfy_cast_weights(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\ops.py", line 217, in forward_comfy_cast_weights
out = super().forward_comfy_cast_weights(input, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 106, in forward_comfy_cast_weights
return torch.nn.functional.linear(input, weight, bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: mat1 and mat2 shapes cannot be multiplied (5400x1280 and 3840x1280)
2025-08-19T11:05:10.040217 - Prompt executed in 205.67 seconds
2025-08-19T11:08:11.905445 - got prompt
2025-08-19T11:08:12.308977 - Requested to load WanVAE
2025-08-19T11:08:14.714912 - 0 models unloaded.
2025-08-19T11:08:14.794369 - loaded partially 128.0 127.9998779296875 0
2025-08-19T11:08:15.533913 - 0 models unloaded.
2025-08-19T11:08:15.539425 - loaded partially 128.0 127.9998779296875 0
2025-08-19T11:08:16.048528 - Requested to load QwenImageTEModel_
2025-08-19T11:08:18.229413 - loaded partially 5645.8 5645.798828125 0
2025-08-19T11:08:18.347653 - !!! Exception during processing !!! mat1 and mat2 shapes cannot be multiplied (5400x1280 and 3840x1280)
2025-08-19T11:08:18.350654 - Traceback (most recent call last):
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\execution.py", line 496, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\execution.py", line 315, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\execution.py", line 289, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\execution.py", line 277, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_qwen.py", line 55, in encode
conditioning = clip.encode_from_tokens_scheduled(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 170, in encode_from_tokens_scheduled
pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 232, in encode_from_tokens
o = self.cond_stage_model.encode_token_weights(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\qwen_image.py", line 51, in encode_token_weights
out, pooled, extra = super().encode_token_weights(token_weight_pairs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 686, in encode_token_weights
out = getattr(self, self.clip).encode_token_weights(token_weight_pairs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 45, in encode_token_weights
o = self.encode(to_encode)
^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 291, in encode
return self(tokens)
^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 253, in forward
embeds, attention_mask, num_tokens, embeds_info = self.process_tokens(tokens, device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 219, in process_tokens
emb, extra = self.transformer.preprocess_embed(emb, device=device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\llama.py", line 384, in preprocess_embed
return self.visual(image.to(device, dtype=torch.float32), grid), grid
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\qwen_vl.py", line 425, in forward
hidden_states = block(hidden_states, position_embeddings, cu_seqlens_now, optimized_attention=optimized_attention)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\qwen_vl.py", line 252, in forward
hidden_states = self.attn(hidden_states, position_embeddings, cu_seqlens, optimized_attention)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\qwen_vl.py", line 195, in forward
qkv = self.qkv(hidden_states)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 110, in forward
return self.forward_comfy_cast_weights(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\ops.py", line 217, in forward_comfy_cast_weights
out = super().forward_comfy_cast_weights(input, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 106, in forward_comfy_cast_weights
return torch.nn.functional.linear(input, weight, bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: mat1 and mat2 shapes cannot be multiplied (5400x1280 and 3840x1280)
2025-08-19T11:08:18.355768 - Prompt executed in 6.45 seconds
2025-08-19T11:08:46.383409 - got prompt
2025-08-19T11:08:46.397709 - Requested to load WanVAE
2025-08-19T11:08:48.746865 - 0 models unloaded.
2025-08-19T11:08:48.818964 - loaded partially 128.0 127.9998779296875 0
2025-08-19T11:08:49.725146 - 0 models unloaded.
2025-08-19T11:08:49.731662 - loaded partially 128.0 127.9998779296875 0
2025-08-19T11:08:50.242586 - Requested to load QwenImageTEModel_
2025-08-19T11:08:52.555271 - loaded partially 5645.8 5645.798828125 0
2025-08-19T11:08:52.706135 - !!! Exception during processing !!! mat1 and mat2 shapes cannot be multiplied (7920x1280 and 3840x1280)
2025-08-19T11:08:52.708129 - Traceback (most recent call last):
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\execution.py", line 496, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\execution.py", line 315, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\execution.py", line 289, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\execution.py", line 277, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_qwen.py", line 55, in encode
conditioning = clip.encode_from_tokens_scheduled(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 170, in encode_from_tokens_scheduled
pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 232, in encode_from_tokens
o = self.cond_stage_model.encode_token_weights(tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\qwen_image.py", line 51, in encode_token_weights
out, pooled, extra = super().encode_token_weights(token_weight_pairs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 686, in encode_token_weights
out = getattr(self, self.clip).encode_token_weights(token_weight_pairs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 45, in encode_token_weights
o = self.encode(to_encode)
^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 291, in encode
return self(tokens)
^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 253, in forward
embeds, attention_mask, num_tokens, embeds_info = self.process_tokens(tokens, device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 219, in process_tokens
emb, extra = self.transformer.preprocess_embed(emb, device=device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\llama.py", line 384, in preprocess_embed
return self.visual(image.to(device, dtype=torch.float32), grid), grid
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\qwen_vl.py", line 425, in forward
hidden_states = block(hidden_states, position_embeddings, cu_seqlens_now, optimized_attention=optimized_attention)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\qwen_vl.py", line 252, in forward
hidden_states = self.attn(hidden_states, position_embeddings, cu_seqlens, optimized_attention)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\qwen_vl.py", line 195, in forward
qkv = self.qkv(hidden_states)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexa\Desktop\ComfyUI_windows_portable\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 110, in forward
return self.forward_comfy_cast_weights(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\ops.py", line 217, in forward_comfy_cast_weights
out = super().forward_comfy_cast_weights(input, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI_ART_AND_Video\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 106, in forward_comfy_cast_weights
return torch.nn.functional.linear(input, weight, bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: mat1 and mat2 shapes cannot be multiplied (7920x1280 and 3840x1280)
2025-08-19T11:08:52.712652 - Prompt executed in 6.33 seconds
2025-08-19T11:11:48.783025 - got prompt
2025-08-19T11:11:48.867699 - Requested to load WanVAE
2025-08-19T11:11:51.019846 - 0 models unloaded.
2025-08-19T11:11:51.084970 - loaded partially 128.0 127.9998779296875 0
2025-08-19T11:11:51.617416 - Requested to load QwenImageTEModel_
2025-08-19T11:11:53.669676 - loaded partially 5645.8 5645.798828125 0
2025-08-19T11:11:53.736537 - !!! Exception during processing !!! mat1 and mat2 shapes cannot be multiplied (7920x1280 and 3840x1280)
2025-08-19T11:11:53.738541 - Traceback (most recent call last):
[... traceback identical to the previous run ...]
RuntimeError: mat1 and mat2 shapes cannot be multiplied (7920x1280 and 3840x1280)
2025-08-19T11:11:53.743544 - Prompt executed in 4.96 seconds
Other
Can you fix this as soon as possible? Thx!
It's a GGUF node issue. As a workaround, replace "ClipLoaderGGUF" with "Load Clip" and download qwen_2.5_vl_7b_fp8_scaled.safetensors.
I'm facing exactly the same issue. Yes, the Qwen fp8 text encoder works, but the GGUF version of it does not. We need the GGUF version to work.
Have you reported this to the GGUF node author? That'd be the best place to start.
The issue is caused by a missing or wrongly named mmproj file; see https://github.com/city96/ComfyUI-GGUF/issues/329#issuecomment-3281779034.
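For reference, the mismatch in the log can be reproduced in isolation, outside of ComfyUI. `nn.Linear` stores its weight as `(out_features, in_features)`, and `F.linear` computes `x @ w.T`, so an input whose last dimension is 1280 cannot feed a layer whose transposed weight is 3840x1280 (which is what a mismatched mmproj produces for the vision tower's qkv projection). A minimal sketch, with shapes taken from the log above (not ComfyUI code):

```python
import torch
import torch.nn.functional as F

# Shapes from the error message: the vision embeddings have 1280 channels,
# but the qkv projection weight transposes to 3840x1280, i.e. it expects
# 3840 input features.
x = torch.randn(7920, 1280)   # patch embeddings: 7920 tokens x 1280 channels
w = torch.randn(1280, 3840)   # Linear weight stored as (out_features, in_features)

try:
    F.linear(x, w)            # computes x @ w.T, which needs x.shape[-1] == 3840
except RuntimeError as e:
    print(e)                  # mat1 and mat2 shapes cannot be multiplied (7920x1280 and 3840x1280)
```

This is why loading a correctly named mmproj alongside the GGUF text encoder (per the linked comment) resolves it: the vision tower then produces embeddings with the channel count the projection actually expects.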
> It's a GGUF node issue. Replace "ClipLoaderGGUF" with "Load Clip" and download qwen_2.5_vl_7b_fp8_scaled.safetensors

This fixed the problem for me. Thank you.
https://github.com/city96/ComfyUI-GGUF/issues/329#issuecomment-3281779034 or https://github.com/city96/ComfyUI-GGUF/issues/317
Try one of these; it worked for me.