
Use fp8 Clip Text Encoder

Open andypotato opened this issue 9 months ago • 14 comments

The Clip Text Encoder umt5-xxl-enc-fp8_e4m3fn.safetensors is significantly smaller than the bf16 version (6.7 GB vs. 11.4 GB). However the WanVideo T5 Text Encoder node only supports fp16, fp32 and bf16.

Is it possible to add fp8 support for this node so the smaller text encoder can be used?
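The size gap between the two files lines up with simple bytes-per-parameter arithmetic (rough figures, assuming the file is almost entirely weights):

```python
# Back-of-the-envelope check on the file sizes (illustrative only).
bf16_gb = 11.4
params_billion = bf16_gb / 2    # bf16 stores 2 bytes per parameter -> ~5.7B params
fp8_gb = params_billion * 1     # fp8 (e4m3fn) stores 1 byte per parameter
print(f"~{params_billion:.1f}B params, fp8 weights ~{fp8_gb:.1f} GB")
# The actual 6.7 GB file is somewhat larger, consistent with keeping some
# layers (e.g. norms, embeddings) in higher precision.
```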

andypotato avatar Mar 01 '25 04:03 andypotato

@kijai It would be cool to have the ability to run the whole pipeline under 6GB of VRAM. Maybe there is an option for a CPU-only Clip Text Encoder.

The Clip Text Encoder umt5-xxl-enc-fp8_e4m3fn.safetensors is significantly smaller than the bf16 version (6.7 GB vs. 11.4 GB). However the WanVideo T5 Text Encoder node only supports fp16, fp32 and bf16.

Is it possible to add fp8 support for this node so the smaller text encoder can be used?

The node does have a quantization option.

In addition, I added a bridge node so you can use the native ComfyUI text encoding, which also allows using the native CLIP vision loader for I2V:

Image

kijai avatar Mar 03 '25 10:03 kijai

@kijai Sorry for the dumb question: is that the correct flow? Because I'm getting an error: mat1 and mat2 shapes cannot be multiplied (512x768 and 4096x1536)

Image

Image
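That mat1/mat2 error is a generic matrix-shape mismatch: the embeddings being fed in appear to be 768-dimensional (CLIP-sized), while the model expects 4096-dimensional UMT5 embeddings. A minimal illustration of the rule that fails (plain Python, not the wrapper's code):

```python
def can_matmul(shape_a, shape_b):
    # Matrix multiplication (m, k) @ (k, n) requires the inner dims to match.
    return shape_a[1] == shape_b[0]

print(can_matmul((512, 768), (4096, 1536)))   # the failing case from the error
print(can_matmul((512, 4096), (4096, 1536)))  # what a UMT5-sized embedding would allow
```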

@kijai It would be cool to have the ability to run the whole pipeline under 6GB of VRAM. Maybe there is an option for a CPU-only Clip Text Encoder.

You mean the T5?

@kijai Sorry for the dumb question: is that the correct flow? Because I'm getting an error: mat1 and mat2 shapes cannot be multiplied (512x768 and 4096x1536)

Image

Image

Yes, but the point was to allow using the comfy-org shared models. You don't need this for my fp8 model; it doesn't work with that, just as it doesn't work with the native nodes either.

kijai avatar Mar 03 '25 11:03 kijai

@kijai OMG! It works on an RTX 2060 6GB!!! Not fast, but it works! Could be a reasonable solution for the GPU-poor.

Image

Image

The only thing I don't understand is why, for some reason, the fp16 VAE without tiling doesn't give an OOM while bf16 always does. I used the python main.py --lowvram argument.

Image

https://github.com/user-attachments/assets/1a255891-1d8e-4ef3-8502-4e433a3d83bd
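Tiled VAE decoding avoids OOM by only ever materializing the activations for one spatial window at a time. A toy sketch of the tiling loop (hypothetical names, not the wrapper's actual implementation):

```python
def tile_windows(height, width, tile, overlap):
    """Yield (y0, y1, x0, x1) windows covering the frame; the overlapping
    edges are later blended so tile seams don't show in the decoded video."""
    stride = tile - overlap
    for y in range(0, height, stride):
        for x in range(0, width, stride):
            yield y, min(y + tile, height), x, min(x + tile, width)

windows = list(tile_windows(480, 480, 256, 64))
print(len(windows))  # 9: peak memory now scales with one 256x256 tile, not 480x480
```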

Here are some more samples on RTX 2060 6GB VRAM (Workflow should be attached to the image):

Image Image

https://github.com/user-attachments/assets/d5a1a57d-74b1-4bbb-afa9-119de0df3df6

Adding the WanVideo TeaCache node cut execution time almost in half with minor quality degradation:

Image Image

https://github.com/user-attachments/assets/8d45224a-a184-4dc9-93b3-fb3b43c6ce69
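TeaCache-style caching speeds things up by skipping the expensive transformer call whenever the conditioning changed too little to matter, reusing the previous step's residual instead. A toy sketch of the idea (simplified; the real node uses a calibrated estimate of output change, not this raw delta):

```python
def cached_step(model, x, t_embed, cache, threshold=0.1):
    """Skip `model` when the timestep embedding barely moved since last call."""
    prev = cache.get("t_embed")
    if prev is not None and abs(t_embed - prev) / (abs(prev) + 1e-8) < threshold:
        return x + cache["residual"]            # reuse cached residual, no model call
    out = model(x, t_embed)
    cache["t_embed"], cache["residual"] = t_embed, out - x
    return out

calls = []
model = lambda x, t: calls.append(t) or x * 0.9  # stand-in for the transformer
cache = {}
r1 = cached_step(model, 1.0, 0.99, cache)
r2 = cached_step(model, 1.0, 0.98, cache)        # delta ~1% -> skipped
print(len(calls))  # 1: the second step reused the cache
```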

I'm trying to use MultiTalk with a workflow I downloaded from a YouTube video (so I didn't make it), but I'm hitting the same issue with the same error because my text encoder is in GGUF format.

Image

@kijai Reading your comment "In addition I added a bridge node so you can use the native ComfyUI text encoding, and also allowing using the native clip vision loader for I2V" makes me think there is a workaround, but looking at the screenshot and playing around, I haven't been able to figure it out.

multitalk wan 2.1.json

Zod1234 avatar Aug 09 '25 22:08 Zod1234

I'm trying to use MultiTalk with a workflow I downloaded from a YouTube video (so I didn't make it), but I'm hitting the same issue with the same error because my text encoder is in GGUF format.

Image [@kijai](https://github.com/kijai) Reading your comment "In addition I added a bridge node so you can use the native ComfyUI text encoding, and also allowing using the native clip vision loader for I2V" makes me think there is a workaround, but looking at the screenshot and playing around, I haven't been able to figure it out.

multitalk wan 2.1.json

Image

and connect that straight to the sampler.

kijai avatar Aug 09 '25 22:08 kijai

Thanks for the quick response! I'm new to ComfyUI and thought I still had to include the WanVideo TextEncode node somehow, but I got it working from your response and gained a bit more understanding of how all this works. Thanks again.

Zod1234 avatar Aug 09 '25 22:08 Zod1234

More than likely a silly question, but how would we use the native ComfyUI text encoding now in v1.7.6? I'm assuming this is now very different from the other versions (yes, new to this game), but looking at the WAN2.2 I2V workflow as an example, it just plugs directly into an Everything Anywhere node, and I can't see how to make changes.

kalvaer avatar Oct 20 '25 11:10 kalvaer

@kijai

Image

All the models are either Q4_KM or FP16, so why do they always run into OOM errors? When I run wan2.2 s2v and wan2.2 animate normally, they are more than sufficient. Why does your node consume even more resources than those? WanVideo Block Swap makes no difference whether it is enabled or not; it still results in OOM.

Total VRAM 12282 MB, total RAM 16106 MB
pytorch version: 2.8.0+cu128
xformers version: 0.0.32.post2
Enabled fp16 accumulation.
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 Ti : cudaMallocAsync
Enabled pinned memory 7247.0
Using sage attention
Python version: 3.12.10 (tags/v3.12.10:0cc8128, Apr  8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
ComfyUI version: 0.3.71
ComfyUI frontend version: 1.30.6
[Prompt Server] web root: D:\ComfyUI\venv\Lib\site-packages\comfyui_frontend_package\static
Total VRAM 12282 MB, total RAM 16106 MB
pytorch version: 2.8.0+cu128
xformers version: 0.0.32.post2
Enabled fp16 accumulation.
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 Ti : cudaMallocAsync

Amazon90 avatar Nov 24 '25 08:11 Amazon90

@kijai

Image All the models are either Q4_KM or FP16, so why do they always run into OOM errors? When I run wan2.2 s2v and wan2.2 animate normally, they are more than sufficient. Why does your node consume even more resources than those? WanVideo Block Swap makes no difference whether it is enabled or not; it still results in OOM.
Total VRAM 12282 MB, total RAM 16106 MB
pytorch version: 2.8.0+cu128
xformers version: 0.0.32.post2
Enabled fp16 accumulation.
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 Ti : cudaMallocAsync
Enabled pinned memory 7247.0
Using sage attention
Python version: 3.12.10 (tags/v3.12.10:0cc8128, Apr  8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
ComfyUI version: 0.3.71
ComfyUI frontend version: 1.30.6
[Prompt Server] web root: D:\ComfyUI\venv\Lib\site-packages\comfyui_frontend_package\static
Total VRAM 12282 MB, total RAM 16106 MB
pytorch version: 2.8.0+cu128
xformers version: 0.0.32.post2
Enabled fp16 accumulation.
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 Ti : cudaMallocAsync

As I understand it, your 12 GB VRAM + 16 GB RAM configuration is not enough to run InfiniteTalk, though I haven't tried the Q4 models. You can try setting block swap to 10 and watch the memory usage, and also set virtual memory to a high value, say 50 GB, so there is enough space to cache the model. Also, for the InfiniteTalk node, switch to the version with "long" in its name; if you still get OOM, you can reduce the context window in that node, and it may get through.
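Block swap keeps only part of the transformer resident in VRAM and streams the remaining blocks from system RAM per step, which is why a larger page file helps. A toy sketch of the loop (hypothetical callbacks, not the wrapper's implementation):

```python
def forward_with_block_swap(blocks, x, blocks_to_swap, to_gpu, to_cpu):
    """Run `blocks` in order; the last `blocks_to_swap` blocks live in CPU RAM
    and are uploaded to the GPU only for their own forward pass."""
    resident = len(blocks) - blocks_to_swap
    for i, block in enumerate(blocks):
        if i >= resident:
            to_gpu(block)       # upload just before use
        x = block(x)
        if i >= resident:
            to_cpu(block)       # evict immediately, freeing VRAM for the next block
    return x

swaps = []
blocks = [lambda v: v + 1 for _ in range(40)]   # the log reports num_layers: 40
out = forward_with_block_swap(blocks, 0, 10, swaps.append, lambda b: None)
print(out, len(swaps))  # 40 10: only 10 uploads per pass, 30 blocks stay resident
```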

swan7-py avatar Nov 24 '25 08:11 swan7-py

@kijai Image All the models are either Q4_KM or FP16, so why do they always run into OOM errors? When I run wan2.2 s2v and wan2.2 animate normally, they are more than sufficient. Why does your node consume even more resources than those? WanVideo Block Swap makes no difference whether it is enabled or not; it still results in OOM.

Total VRAM 12282 MB, total RAM 16106 MB
pytorch version: 2.8.0+cu128
xformers version: 0.0.32.post2
Enabled fp16 accumulation.
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 Ti : cudaMallocAsync
Enabled pinned memory 7247.0
Using sage attention
Python version: 3.12.10 (tags/v3.12.10:0cc8128, Apr  8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
ComfyUI version: 0.3.71
ComfyUI frontend version: 1.30.6
[Prompt Server] web root: D:\ComfyUI\venv\Lib\site-packages\comfyui_frontend_package\static
Total VRAM 12282 MB, total RAM 16106 MB
pytorch version: 2.8.0+cu128
xformers version: 0.0.32.post2
Enabled fp16 accumulation.
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 Ti : cudaMallocAsync

As I understand it, your 12 GB VRAM + 16 GB RAM configuration is not enough to run InfiniteTalk, though I haven't tried the Q4 models. You can try setting block swap to 10 and watch the memory usage, and also set virtual memory to a high value, say 50 GB, so there is enough space to cache the model. Also, for the InfiniteTalk node, switch to the version with "long" in its name; if you still get OOM, you can reduce the context window in that node, and it may get through.

ComfyUI Error Report

Error Details

  • Node ID: 128
  • Node Type: WanVideoSampler
  • Exception Type: torch.OutOfMemoryError
  • Exception Message: Allocation on device. This error means you ran out of memory on your GPU.

TIPS: If the workflow worked before you might have accidentally set the batch_size to a large number.

Stack Trace

  File "D:\ComfyUI\execution.py", line 510, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
                                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\execution.py", line 324, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\execution.py", line 298, in _async_map_node_over_list
    await process_inputs(input_dict, i)

  File "D:\ComfyUI\execution.py", line 286, in process_inputs
    result = f(**inputs)
             ^^^^^^^^^^^

  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes_sampler.py", line 3118, in process
    raise e

  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes_sampler.py", line 2271, in process
    y = vae.encode(padding_frames_pixels_values, device=device, tiled=tiled_vae, pbar=False).to(dtype)[0]
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\wan_video_vae.py", line 1327, in encode
    hidden_state = self.single_encode(video, device, pbar=pbar, sample=sample)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\wan_video_vae.py", line 1295, in single_encode
    x = self.model.encode(video, pbar=pbar, sample=sample)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\wan_video_vae.py", line 1029, in encode
    out_ = self.encoder(x[:, :, 1 + 4 * (i - 1):1 + 4 * i, :, :],
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\wan_video_vae.py", line 574, in forward
    x = layer(x, feat_cache, feat_idx)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\wan_video_vae.py", line 284, in forward
    x = layer(x, feat_cache[idx])
        ^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\wan_video_vae.py", line 38, in forward
    x = F.pad(x, padding)
        ^^^^^^^^^^^^^^^^^

  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\functional.py", line 5290, in pad
    return torch._C._nn.pad(input, pad, mode, value)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

System Information

  • ComfyUI Version: 0.3.71
  • Arguments: D:\ComfyUI\main.py --auto-launch --preview-method auto --use-sage-attention --disable-cuda-malloc --fast
  • OS: nt
  • Python Version: 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.8.0+cu128

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 4070 Ti : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 12878086144
    • VRAM Free: 11572084736
    • Torch VRAM Total: 0
    • Torch VRAM Free: 0

Logs

2025-11-24T16:45:54.773830 - [START] Security scan2025-11-24T16:45:54.773830 - 
2025-11-24T16:46:07.841705 - [DONE] Security scan2025-11-24T16:46:07.841705 - 
2025-11-24T16:46:08.181295 - ## ComfyUI-Manager: installing dependencies done.2025-11-24T16:46:08.181295 - 
2025-11-24T16:46:08.181295 - ** ComfyUI startup time:2025-11-24T16:46:08.181295 -  2025-11-24T16:46:08.181295 - 2025-11-24 16:46:08.1812025-11-24T16:46:08.181295 - 
2025-11-24T16:46:08.181295 - ** Platform:2025-11-24T16:46:08.181295 -  2025-11-24T16:46:08.181295 - Windows2025-11-24T16:46:08.181295 - 
2025-11-24T16:46:08.181295 - ** Python version:2025-11-24T16:46:08.182296 -  2025-11-24T16:46:08.182296 - 3.12.10 (tags/v3.12.10:0cc8128, Apr  8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]2025-11-24T16:46:08.182296 - 
2025-11-24T16:46:08.182296 - ** Python executable:2025-11-24T16:46:08.182296 -  2025-11-24T16:46:08.182296 - D:\ComfyUI\venv\Scripts\Python.exe2025-11-24T16:46:08.182296 - 
2025-11-24T16:46:08.182296 - ** ComfyUI Path:2025-11-24T16:46:08.182296 -  2025-11-24T16:46:08.182296 - D:\ComfyUI2025-11-24T16:46:08.182296 - 
2025-11-24T16:46:08.182296 - ** ComfyUI Base Folder Path:2025-11-24T16:46:08.182296 -  2025-11-24T16:46:08.182296 - D:\ComfyUI2025-11-24T16:46:08.182296 - 
2025-11-24T16:46:08.182296 - ** User directory:2025-11-24T16:46:08.182296 -  2025-11-24T16:46:08.182296 - D:\ComfyUI\user2025-11-24T16:46:08.182296 - 
2025-11-24T16:46:08.182296 - ** ComfyUI-Manager config path:2025-11-24T16:46:08.182296 -  2025-11-24T16:46:08.182296 - D:\ComfyUI\user\default\ComfyUI-Manager\config.ini2025-11-24T16:46:08.183295 - 
2025-11-24T16:46:08.183295 - ** Log path:2025-11-24T16:46:08.183295 -  2025-11-24T16:46:08.183295 - D:\ComfyUI\user\comfyui.log2025-11-24T16:46:08.183295 - 
2025-11-24T16:46:19.526198 - 
Prestartup times for custom nodes:
2025-11-24T16:46:19.526198 -    0.0 seconds: D:\ComfyUI\custom_nodes\rgthree-comfy
2025-11-24T16:46:19.526799 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Easy-Use
2025-11-24T16:46:19.526799 -   25.2 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Manager
2025-11-24T16:46:19.526799 - 
2025-11-24T16:46:23.544377 - Checkpoint files will always be loaded safely.
2025-11-24T16:46:23.678089 - Total VRAM 12282 MB, total RAM 16106 MB
2025-11-24T16:46:23.678089 - pytorch version: 2.8.0+cu128
2025-11-24T16:46:26.544787 - xformers version: 0.0.32.post2
2025-11-24T16:46:26.544787 - Enabled fp16 accumulation.
2025-11-24T16:46:26.545308 - Set vram state to: NORMAL_VRAM
2025-11-24T16:46:26.545308 - Device: cuda:0 NVIDIA GeForce RTX 4070 Ti : cudaMallocAsync
2025-11-24T16:46:26.561314 - Enabled pinned memory 7247.0
2025-11-24T16:46:27.175731 - Using sage attention
2025-11-24T16:46:30.477442 - Python version: 3.12.10 (tags/v3.12.10:0cc8128, Apr  8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
2025-11-24T16:46:30.478446 - ComfyUI version: 0.3.71
2025-11-24T16:46:30.720839 - ComfyUI frontend version: 1.30.6
2025-11-24T16:46:30.723840 - [Prompt Server] web root: D:\ComfyUI\venv\Lib\site-packages\comfyui_frontend_package\static
2025-11-24T16:46:31.752624 - Total VRAM 12282 MB, total RAM 16106 MB
2025-11-24T16:46:31.753628 - pytorch version: 2.8.0+cu128
2025-11-24T16:46:31.753628 - xformers version: 0.0.32.post2
2025-11-24T16:46:31.753628 - Enabled fp16 accumulation.
2025-11-24T16:46:31.754627 - Set vram state to: NORMAL_VRAM
2025-11-24T16:46:31.754627 - Device: cuda:0 NVIDIA GeForce RTX 4070 Ti : cudaMallocAsync
2025-11-24T16:46:31.770197 - Enabled pinned memory 7247.0
2025-11-24T16:46:39.646791 - [34m[ComfyUI-Easy-Use] server: [0mv1.3.4 [92mLoaded[0m2025-11-24T16:46:39.646791 - 
2025-11-24T16:46:39.646791 - [34m[ComfyUI-Easy-Use] web root: [0mD:\ComfyUI\custom_nodes\ComfyUI-Easy-Use\web_version/v2 [92mLoaded[0m2025-11-24T16:46:39.646791 - 
2025-11-24T16:46:39.673503 - ComfyUI-GGUF: Allowing full torch compile
2025-11-24T16:46:39.760696 - [JoyCaption] ℹ️ No custom models found, skipping user-defined HF models.2025-11-24T16:46:39.761694 - 
2025-11-24T16:46:39.763714 - [JoyCaption GGUF] ℹ️ No custom models found, skipping user-defined GGUF models.2025-11-24T16:46:39.763714 - 
2025-11-24T16:46:39.818668 - ### Loading: ComfyUI-Manager (V3.37.1)
2025-11-24T16:46:39.819667 - [ComfyUI-Manager] network_mode: public
2025-11-24T16:46:39.820670 - [ComfyUI-Manager] Since --preview-method is set, ComfyUI-Manager's preview method feature will be ignored.
2025-11-24T16:46:40.227368 - ### ComfyUI Version: v0.3.71-6-gf66183a5 | Released on '2025-11-23'
2025-11-24T16:46:41.090662 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-11-24T16:46:41.153978 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-11-24T16:46:41.298877 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-11-24T16:46:41.381273 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-11-24T16:46:41.464984 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-11-24T16:46:42.181939 - ======================================== ComfyUI-nunchaku Initialization ========================================
2025-11-24T16:46:42.186821 - Nunchaku version: 1.0.2
2025-11-24T16:46:42.188821 - ComfyUI-nunchaku version: 1.0.2
2025-11-24T16:46:44.074177 - 'nunchaku_versions.json' not found. Node will start in minimal mode. Use 'update node' to fetch versions.2025-11-24T16:46:44.074177 - 
2025-11-24T16:46:44.074177 - =================================================================================================================
2025-11-24T16:46:44.406515 - [34m[ComfyUI-RMBG][0m v[93m2.9.0[0m | [93m32 nodes[0m [92mLoaded[0m2025-11-24T16:46:44.406515 - 
2025-11-24T16:46:45.542643 - ### Loading: SDPose OOD Nodes ###2025-11-24T16:46:45.542643 - 
2025-11-24T16:46:45.602977 - ⚡ SeedVR2 optimizations check: Flash Attention ✅ | Triton ✅2025-11-24T16:46:45.602977 - 
2025-11-24T16:46:45.672090 - 📊 Initial CUDA memory: 10.78GB free / 11.99GB total2025-11-24T16:46:45.672090 - 
2025-11-24T16:46:46.274875 - [36;20m[D:\ComfyUI\custom_nodes\comfyui_controlnet_aux] | INFO -> Using ckpts path: D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts[0m
2025-11-24T16:46:46.275909 - [36;20m[D:\ComfyUI\custom_nodes\comfyui_controlnet_aux] | INFO -> Using symlinks: False[0m
2025-11-24T16:46:46.276507 - [36;20m[D:\ComfyUI\custom_nodes\comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider'][0m
2025-11-24T16:46:46.622541 - 
2025-11-24T16:46:46.622541 - [92m[rgthree-comfy] Loaded 48 exciting nodes. 🎉[0m2025-11-24T16:46:46.622541 - 
2025-11-24T16:46:46.622541 - 
2025-11-24T16:46:46.630387 - 
Import times for custom nodes:
2025-11-24T16:46:46.630387 -    0.0 seconds: D:\ComfyUI\custom_nodes\websocket_image_save.py
2025-11-24T16:46:46.630387 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Inpaint-CropAndStitch
2025-11-24T16:46:46.631057 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-GGUF
2025-11-24T16:46:46.631057 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
2025-11-24T16:46:46.631057 -    0.0 seconds: D:\ComfyUI\custom_nodes\cg-use-everywhere
2025-11-24T16:46:46.631057 -    0.0 seconds: D:\ComfyUI\custom_nodes\Comfyui-SecNodes
2025-11-24T16:46:46.631057 -    0.0 seconds: D:\ComfyUI\custom_nodes\rgthree-comfy
2025-11-24T16:46:46.631057 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-KJNodes
2025-11-24T16:46:46.631057 -    0.1 seconds: D:\ComfyUI\custom_nodes\comfyui_controlnet_aux
2025-11-24T16:46:46.631057 -    0.1 seconds: D:\ComfyUI\custom_nodes\ComfyUI_LayerStyle_Advance
2025-11-24T16:46:46.631057 -    0.1 seconds: D:\ComfyUI\custom_nodes\ComfyUI-JoyCaption
2025-11-24T16:46:46.631057 -    0.1 seconds: D:\ComfyUI\custom_nodes\ComfyUI-SeedVR2_VideoUpscaler
2025-11-24T16:46:46.631057 -    0.2 seconds: D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper
2025-11-24T16:46:46.631057 -    0.2 seconds: D:\ComfyUI\custom_nodes\ComfyUI_LayerStyle
2025-11-24T16:46:46.631057 -    0.3 seconds: D:\ComfyUI\custom_nodes\comfyui-rmbg
2025-11-24T16:46:46.631057 -    0.4 seconds: D:\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
2025-11-24T16:46:46.631057 -    0.8 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Easy-Use
2025-11-24T16:46:46.631057 -    1.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Manager
2025-11-24T16:46:46.631057 -    1.1 seconds: D:\ComfyUI\custom_nodes\ComfyUI-SDPose-OOD
2025-11-24T16:46:46.631057 -    1.4 seconds: D:\ComfyUI\custom_nodes\ComfyUI-MelBandRoFormer
2025-11-24T16:46:46.631057 -    1.9 seconds: D:\ComfyUI\custom_nodes\ComfyUI-nunchaku
2025-11-24T16:46:46.631057 -    6.4 seconds: D:\ComfyUI\custom_nodes\ComfyUI-DyPE-Nunchaku
2025-11-24T16:46:46.631057 - 
2025-11-24T16:46:46.692208 - FETCH ComfyRegistry Data: 5/1082025-11-24T16:46:46.692208 - 
2025-11-24T16:46:47.266812 - Context impl SQLiteImpl.
2025-11-24T16:46:47.266812 - Will assume non-transactional DDL.
2025-11-24T16:46:47.269810 - No target revision found.
2025-11-24T16:46:47.373024 - Starting server

2025-11-24T16:46:47.374004 - To see the GUI go to: http://127.0.0.1:8188
2025-11-24T16:46:48.429283 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-11-24T16:46:48.440282 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /extensions/core/groupNode.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-11-24T16:46:48.445288 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /extensions/core/widgetInputs.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-11-24T16:46:49.898383 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui/components/button.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-11-24T16:46:49.907384 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui/components/buttonGroup.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-11-24T16:46:52.188346 - FETCH ComfyRegistry Data: 10/1082025-11-24T16:46:52.188346 - 
2025-11-24T16:46:57.540112 - FETCH ComfyRegistry Data: 15/1082025-11-24T16:46:57.540112 - 
2025-11-24T16:47:02.514067 - FETCH ComfyRegistry Data: 20/1082025-11-24T16:47:02.514067 - 
2025-11-24T16:47:07.721433 - FETCH ComfyRegistry Data: 25/1082025-11-24T16:47:07.721433 - 
2025-11-24T16:47:13.861839 - FETCH ComfyRegistry Data: 30/1082025-11-24T16:47:13.861839 - 
2025-11-24T16:47:18.545834 - FETCH ComfyRegistry Data: 35/1082025-11-24T16:47:18.545834 - 
2025-11-24T16:47:23.239169 - FETCH ComfyRegistry Data: 40/1082025-11-24T16:47:23.239169 - 
2025-11-24T16:47:28.039780 - FETCH ComfyRegistry Data: 45/1082025-11-24T16:47:28.039780 - 
2025-11-24T16:47:32.856386 - FETCH ComfyRegistry Data: 50/1082025-11-24T16:47:32.856386 - 
2025-11-24T16:47:37.685318 - FETCH ComfyRegistry Data: 55/1082025-11-24T16:47:37.685318 - 
2025-11-24T16:47:42.743902 - FETCH ComfyRegistry Data: 60/1082025-11-24T16:47:42.743902 - 
2025-11-24T16:47:47.695263 - FETCH ComfyRegistry Data: 65/1082025-11-24T16:47:47.695263 - 
2025-11-24T16:47:52.825289 - FETCH ComfyRegistry Data: 70/1082025-11-24T16:47:52.825289 - 
2025-11-24T16:47:57.576991 - FETCH ComfyRegistry Data: 75/1082025-11-24T16:47:57.576991 - 
2025-11-24T16:48:02.311097 - FETCH ComfyRegistry Data: 80/1082025-11-24T16:48:02.311097 - 
2025-11-24T16:48:07.025724 - FETCH ComfyRegistry Data: 85/1082025-11-24T16:48:07.025724 - 
2025-11-24T16:48:11.978723 - FETCH ComfyRegistry Data: 90/1082025-11-24T16:48:11.978723 - 
2025-11-24T16:48:16.696039 - FETCH ComfyRegistry Data: 95/1082025-11-24T16:48:16.696039 - 
2025-11-24T16:48:21.397720 - FETCH ComfyRegistry Data: 100/1082025-11-24T16:48:21.397720 - 
2025-11-24T16:48:26.604796 - FETCH ComfyRegistry Data: 105/1082025-11-24T16:48:26.604796 - 
2025-11-24T16:48:29.994813 - FETCH ComfyRegistry Data [DONE]2025-11-24T16:48:29.994813 - 
2025-11-24T16:48:30.103622 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
2025-11-24T16:48:30.129628 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json2025-11-24T16:48:30.129628 - 2025-11-24T16:48:31.148243 -  [DONE]2025-11-24T16:48:31.148243 - 
2025-11-24T16:48:31.196817 - [ComfyUI-Manager] All startup tasks have been completed.
2025-11-24T16:49:47.068876 - got prompt
2025-11-24T16:49:53.183103 - [MultiTalk] --- Raw speaker lengths (samples) ---
2025-11-24T16:49:53.183103 -   speaker 1: 48000 samples (shape: torch.Size([1, 1, 48000]))
2025-11-24T16:49:53.183103 - [MultiTalk] Audio duration (75 frames) is shorter than requested (77 frames). Using 75 frames.
2025-11-24T16:49:53.183103 - [MultiTalk] total raw duration = 3.000s
2025-11-24T16:49:53.183103 - [MultiTalk] multi_audio_type=para | final waveform shape=torch.Size([1, 1, 48000]) | length=48000 samples | seconds=3.000s (expected max of raw)
2025-11-24T16:50:03.665034 - gguf qtypes: Q4_K (144), F32 (73), Q6_K (25)
2025-11-24T16:50:03.836094 - Attempting to recreate sentencepiece tokenizer from GGUF file metadata...
2025-11-24T16:50:16.505901 - Created tokenizer with vocab size of 256384
2025-11-24T16:50:17.083775 - Dequantizing token_embd.weight to prevent runtime OOM.
2025-11-24T16:50:22.976680 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-11-24T16:50:23.016682 - Requested to load WanTEModel
2025-11-24T16:50:27.262325 - loaded completely; 9544.67 MB usable, 4661.20 MB loaded, full load: True
2025-11-24T16:50:30.100887 - Requested to load CLIPVisionModelProjection
2025-11-24T16:50:30.116886 - loaded completely; 3659.37 MB usable, 1208.10 MB loaded, full load: True
2025-11-24T16:50:30.496750 - Clip embeds shape: torch.Size([1, 257, 1280]), dtype: torch.float32
2025-11-24T16:50:30.498748 - Combined clip embeds shape: torch.Size([1, 257, 1280])
2025-11-24T16:50:34.457842 - CUDA Compute Capability: 8.9
2025-11-24T16:50:34.460842 - Detected model in_channels: 36
2025-11-24T16:50:34.460842 - Model cross attention type: i2v, num_heads: 40, num_layers: 40
2025-11-24T16:50:34.461842 - Model variant detected: i2v_480
2025-11-24T16:50:34.785341 - InfiniteTalk detected, patching model...
2025-11-24T16:50:34.930342 - model_type FLOW
2025-11-24T16:50:34.941342 - Loading LoRA: wan21\wan2.1_i2v_lora_rank64_lightx2v_4step with strength: 1.0
2025-11-24T16:50:35.234212 - Using GGUF to load and assign model weights to device...
2025-11-24T16:50:59.901940 - ------- Scheduler info -------
2025-11-24T16:50:59.988940 - Total timesteps: tensor([999, 982, 956, 916, 846, 687], device='cuda:0')
2025-11-24T16:50:59.992940 - Using timesteps: tensor([999, 982, 956, 916, 846, 687], device='cuda:0')
2025-11-24T16:51:00.013944 - Using sigmas: tensor([1.0000, 0.9821, 0.9565, 0.9167, 0.8461, 0.6875, 0.0000])
2025-11-24T16:51:00.014947 - ------------------------------
2025-11-24T16:51:00.015946 - sigmas: tensor([1.0000, 0.9821, 0.9565, 0.9167, 0.8461, 0.6875, 0.0000])
2025-11-24T16:51:00.043922 - Multitalk audio features shapes (per speaker): [(75, 12, 768)]
2025-11-24T16:51:02.290821 - Multitalk mode: infinitetalk
2025-11-24T16:51:02.305364 - Sampling 75 frames in 1 windows, at 480x480 with 6 steps
2025-11-24T16:51:44.778979 - Error during model prediction: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

2025-11-24T16:51:45.414874 - Error during sampling: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

2025-11-24T16:51:47.993558 - !!! Exception during processing !!! CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

2025-11-24T16:51:48.007560 - Traceback (most recent call last):
  File "D:\ComfyUI\execution.py", line 510, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
                                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\execution.py", line 324, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\execution.py", line 298, in _async_map_node_over_list
    await process_inputs(input_dict, i)
  File "D:\ComfyUI\execution.py", line 286, in process_inputs
    result = f(**inputs)
             ^^^^^^^^^^^
  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes_sampler.py", line 3118, in process
    raise e
  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes_sampler.py", line 2405, in process
    noise_pred, _, self.cache_state = predict_with_cfg(
                                      ^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes_sampler.py", line 1599, in predict_with_cfg
    raise e
  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes_sampler.py", line 1469, in predict_with_cfg
    noise_pred_cond, noise_pred_ovi, cache_state_cond = transformer(
                                                        ^^^^^^^^^^^^
  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\model.py", line 2950, in forward
    block.to(self.offload_device, non_blocking=self.use_non_blocking)
  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1369, in to
    return self._apply(convert)
           ^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 928, in _apply
    module._apply(fn)
  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 928, in _apply
    module._apply(fn)
  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 955, in _apply
    param_applied = fn(param)
                    ^^^^^^^^^
  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1355, in convert
    return t.to(
           ^^^^^
  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\gguf\gguf_utils.py", line 317, in __torch_function__
    result = super().__torch_function__(func, types, args, kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.AcceleratorError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


2025-11-24T16:51:48.025558 - Prompt executed in 120.92 seconds
2025-11-24T16:52:45.916939 - got prompt
2025-11-24T16:52:46.381587 - CUDA Compute Capability: 8.9
2025-11-24T16:52:46.381587 - Detected model in_channels: 36
2025-11-24T16:52:46.381913 - Model cross attention type: i2v, num_heads: 40, num_layers: 40
2025-11-24T16:52:46.381913 - Model variant detected: i2v_480
2025-11-24T16:52:46.692727 - InfiniteTalk detected, patching model...
2025-11-24T16:52:46.806647 - model_type FLOW
2025-11-24T16:52:46.814338 - Loading LoRA: wan21\wan2.1_i2v_lora_rank64_lightx2v_4step with strength: 1.0
2025-11-24T16:52:46.883081 - Using GGUF to load and assign model weights to device...
2025-11-24T16:53:33.873340 - ------- Scheduler info -------
2025-11-24T16:53:33.915359 - Total timesteps: tensor([999, 982, 956, 916, 846, 687], device='cuda:0')
2025-11-24T16:53:33.917359 - Using timesteps: tensor([999, 982, 956, 916, 846, 687], device='cuda:0')
2025-11-24T16:53:33.931339 - Using sigmas: tensor([1.0000, 0.9821, 0.9565, 0.9167, 0.8461, 0.6875, 0.0000])
2025-11-24T16:53:33.932339 - ------------------------------
2025-11-24T16:53:33.933341 - sigmas: tensor([1.0000, 0.9821, 0.9565, 0.9167, 0.8461, 0.6875, 0.0000])
2025-11-24T16:53:33.953338 - Multitalk audio features shapes (per speaker): [(75, 12, 768)]
2025-11-24T16:53:37.800090 - Multitalk mode: infinitetalk
2025-11-24T16:53:37.813116 - Sampling 75 frames in 2 windows, at 480x480 with 6 steps
2025-11-24T16:54:47.176444 - Error during sampling: Allocation on device 
2025-11-24T16:54:57.454766 - !!! Exception during processing !!! Allocation on device 
2025-11-24T16:54:57.475225 - Traceback (most recent call last):
  File "D:\ComfyUI\execution.py", line 510, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
                                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\execution.py", line 324, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\execution.py", line 298, in _async_map_node_over_list
    await process_inputs(input_dict, i)
  File "D:\ComfyUI\execution.py", line 286, in process_inputs
    result = f(**inputs)
             ^^^^^^^^^^^
  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes_sampler.py", line 3118, in process
    raise e
  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes_sampler.py", line 2271, in process
    y = vae.encode(padding_frames_pixels_values, device=device, tiled=tiled_vae, pbar=False).to(dtype)[0]
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\wan_video_vae.py", line 1327, in encode
    hidden_state = self.single_encode(video, device, pbar=pbar, sample=sample)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\wan_video_vae.py", line 1295, in single_encode
    x = self.model.encode(video, pbar=pbar, sample=sample)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\wan_video_vae.py", line 1029, in encode
    out_ = self.encoder(x[:, :, 1 + 4 * (i - 1):1 + 4 * i, :, :],
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\wan_video_vae.py", line 574, in forward
    x = layer(x, feat_cache, feat_idx)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\wan_video_vae.py", line 284, in forward
    x = layer(x, feat_cache[idx])
        ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\wan_video_vae.py", line 38, in forward
    x = F.pad(x, padding)
        ^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\venv\Lib\site-packages\torch\nn\functional.py", line 5290, in pad
    return torch._C._nn.pad(input, pad, mode, value)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.OutOfMemoryError: Allocation on device 

2025-11-24T16:54:57.476225 - Got an OOM, unloading all loaded models.
2025-11-24T16:54:57.484645 - Prompt executed in 131.54 seconds
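The second traceback fails inside `wan_video_vae.encode`, which walks the video in temporal chunks (`x[:, :, 1 + 4 * (i - 1):1 + 4 * i, :, :]`), so the VAE still allocates four frames at a time per step even before the `F.pad` that OOMs. A toy sketch of that chunking pattern, using plain Python index lists instead of the real 5-D tensors, and assuming the first chunk is the single leading frame (as Wan's 4n+1 frame layout suggests — the real loop handles `i == 0` separately):

```python
# Toy illustration of the chunked slicing seen in wan_video_vae.encode:
# frame 0 is encoded alone, then each iteration takes a group of 4 frames.
num_frames = 9  # e.g. 1 + 4 * 2, matching Wan's 4n+1 frame counts
frames = list(range(num_frames))

chunks = []
for i in range((num_frames - 1) // 4 + 1):
    if i == 0:
        chunks.append(frames[0:1])                        # x[:, :, :1]
    else:
        chunks.append(frames[1 + 4 * (i - 1):1 + 4 * i])  # x[:, :, 1+4*(i-1):1+4*i]

print(chunks)  # [[0], [1, 2, 3, 4], [5, 6, 7, 8]]
```

Since the allocation happens per 4-frame chunk at full spatial resolution, lowering the resolution or frame count shrinks exactly these allocations.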

Attached Workflow

Workflow too large. Please manually upload the workflow from local file system.

Amazon90 avatar Nov 24 '25 08:11 Amazon90

Image

It doesn't work for me

Amazon90 avatar Nov 24 '25 08:11 Amazon90