ComfyUI-WanVideoWrapper

duplicate template name

Status: Open. ShineHe2023 opened this issue 5 months ago.

ComfyUI Error Report

Error Details

  • Node ID: 53
  • Node Type: WanVideoSampler
  • Exception Type: AssertionError
  • Exception Message: duplicate template name

Stack Trace

  File "D:\ComfyUI\execution.py", line 496, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)

  File "D:\ComfyUI\execution.py", line 315, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)

  File "D:\ComfyUI\execution.py", line 289, in _async_map_node_over_list
    await process_inputs(input_dict, i)

  File "D:\ComfyUI\execution.py", line 277, in process_inputs
    result = f(**inputs)

  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes.py", line 1518, in process
    transformer = compile_model(transformer, model["compile_args"])

  File "D:\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\utils.py", line 495, in compile_model
    transformer.blocks[i] = torch.compile(block, fullgraph=compile_args["fullgraph"], dynamic=compile_args["dynamic"], backend=compile_args["backend"], mode=compile_args["mode"])

  File "D:\ProgramData\miniconda3\envs\comfyui_20250804\lib\site-packages\torch\__init__.py", line 2627, in compile
    return torch._dynamo.optimize(

  File "D:\ProgramData\miniconda3\envs\comfyui_20250804\lib\site-packages\torch\_dynamo\eval_frame.py", line 1138, in optimize
    return _optimize(rebuild_ctx, *args, **kwargs)

  File "D:\ProgramData\miniconda3\envs\comfyui_20250804\lib\site-packages\torch\_dynamo\eval_frame.py", line 1225, in _optimize
    backend.get_compiler_config()

  File "D:\ProgramData\miniconda3\envs\comfyui_20250804\lib\site-packages\torch\__init__.py", line 2382, in get_compiler_config
    from torch._inductor.compile_fx import get_patched_config_dict

  File "D:\ProgramData\miniconda3\envs\comfyui_20250804\lib\site-packages\torch\_inductor\compile_fx.py", line 107, in <module>
    from .fx_passes.joint_graph import joint_graph_passes

  File "D:\ProgramData\miniconda3\envs\comfyui_20250804\lib\site-packages\torch\_inductor\fx_passes\joint_graph.py", line 27, in <module>
    from ..pattern_matcher import (

  File "D:\ProgramData\miniconda3\envs\comfyui_20250804\lib\site-packages\torch\_inductor\pattern_matcher.py", line 79, in <module>
    from .lowering import fallback_node_due_to_unsupported_type

  File "D:\ProgramData\miniconda3\envs\comfyui_20250804\lib\site-packages\torch\_inductor\lowering.py", line 7200, in <module>
    import_submodule(kernel)

  File "D:\ProgramData\miniconda3\envs\comfyui_20250804\lib\site-packages\torch\_dynamo\utils.py", line 3617, in import_submodule
    importlib.import_module(f"{mod.__name__}.{filename[:-3]}")

  File "D:\ProgramData\miniconda3\envs\comfyui_20250804\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)

  File "D:\ProgramData\miniconda3\envs\comfyui_20250804\lib\site-packages\torch\_inductor\kernel\flex_attention.py", line 692, in <module>
    flex_attention_template = TritonTemplate(

  File "D:\ProgramData\miniconda3\envs\comfyui_20250804\lib\site-packages\torch\_inductor\select_algorithm.py", line 1354, in __init__
    assert name not in self.all_templates, "duplicate template name"
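
The assertion at the bottom of the trace fires inside Inductor's template registry: every `TritonTemplate` must register under a unique name, so if the module defining `flex_attention_template` gets executed a second time in the same process (e.g. via the `import_submodule` call visible above), the second registration trips the `duplicate template name` assert. A minimal sketch of that mechanism, using a hypothetical class name rather than PyTorch's actual implementation:

```python
# Hypothetical sketch of a name-keyed template registry like Inductor's.
# Registering the same template name twice raises the same AssertionError
# as in the stack trace above.

class TritonTemplateSketch:
    all_templates: dict = {}  # class-level registry shared by all instances

    def __init__(self, name: str):
        # Mirrors the assert in torch/_inductor/select_algorithm.py
        assert name not in self.all_templates, "duplicate template name"
        self.all_templates[name] = self
        self.name = name


# First registration succeeds (module executed once).
TritonTemplateSketch("flex_attention")

# Re-executing the defining module runs the constructor again:
try:
    TritonTemplateSketch("flex_attention")
except AssertionError as e:
    print(e)  # duplicate template name
```

This is why the error surfaces only when something causes the Inductor kernel modules to be imported twice; the registry itself is working as designed.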


Total VRAM 32607 MB, total RAM 130328 MB
pytorch version: 2.9.0.dev20250802+cu128
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 5090 : cudaMallocAsync
Using pytorch attention
Python version: 3.10.18 | packaged by Anaconda, Inc. | (main, Jun  5 2025, 13:08:55) [MSC v.1929 64 bit (AMD64)]
ComfyUI version: 0.3.49
ComfyUI frontend version: 1.23.4
[Prompt Server] web root: D:\ProgramData\miniconda3\envs\comfyui_20250804\lib\site-packages\comfyui_frontend_package\static
[Crystools INFO] Crystools version: 1.26.6
[Crystools INFO] Platform release: 10
[Crystools INFO] JETSON: Not detected.
[Crystools INFO] CPU: Intel(R) Core(TM) Ultra 9 285K - Arch: AMD64 - OS: Windows 10
[Crystools INFO] pynvml (NVIDIA) initialized.
[Crystools INFO] GPU/s:
[Crystools INFO] 0) NVIDIA GeForce RTX 5090
[Crystools INFO] NVIDIA Driver: 576.88

ShineHe2023, Aug 08 '25

Same problem here, but it started after upgrading to CUDA 13.0 and torch 2.9.x. It also happens with Comfy's native TorchCompileModel node.

thekev, Nov 13 '25

Same problem here! Even after downgrading PyTorch from 2.9.x to 2.8.0, it still happens. Is there a known fix?

Artessay, Dec 03 '25