
RuntimeError: could not create a primitive

Kashouryo opened this issue 1 year ago • 10 comments

ComfyUI Error Report

Error Details

  • Node Type: VAEEncode
  • Exception Type: RuntimeError
  • Exception Message: could not create a primitive

Stack Trace

  File "/home/shouryo/Software/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/Software/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/Software/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "/home/shouryo/Software/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/Software/ComfyUI/nodes.py", line 310, in encode
    t = vae.encode(pixels[:,:,:,:3])
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/Software/ComfyUI/comfy/sd.py", line 355, in encode
    samples[x:x+batch_number] = self.first_stage_model.encode(pixels_in).to(self.output_device).float()
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/Software/ComfyUI/comfy/ldm/models/autoencoder.py", line 179, in encode
    z = self.encoder(x)
        ^^^^^^^^^^^^^^^

  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/Software/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 531, in forward
    h = self.mid.attn_1(h)
        ^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/miniconda3/envs/comfyenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/Software/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 287, in forward
    h_ = self.optimized_attention(q, k, v)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/shouryo/Software/ComfyUI/comfy/ldm/modules/diffusionmodules/model.py", line 235, in pytorch_attention
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=False)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

System Information

  • ComfyUI Version: v0.2.3-13-g7390ff3
  • Arguments: main.py --lowvram
  • OS: posix
  • Python Version: 3.11.9 | Intel Corporation | (main, Aug 12 2024, 23:58:22) [GCC 14.1.0]
  • Embedded Python: false
  • PyTorch Version: 2.3.1+cxx11.abi

Devices

  • Name: xpu
    • Type: xpu
    • VRAM Total: 16225243136
    • VRAM Free: 15916800512
    • Torch VRAM Total: 2602565632
    • Torch VRAM Free: 2294123008

Logs

2024-10-17 15:35:53,321 - root - INFO - Total VRAM 15474 MB, total RAM 128731 MB
2024-10-17 15:35:53,321 - root - INFO - pytorch version: 2.3.1+cxx11.abi
2024-10-17 15:35:53,321 - root - INFO - Set vram state to: LOW_VRAM
2024-10-17 15:35:53,331 - root - INFO - Device: xpu
2024-10-17 15:35:53,336 - root - INFO - Using pytorch cross attention
2024-10-17 15:35:53,656 - root - INFO - [Prompt Server] web root: /home/shouryo/Software/ComfyUI/web
2024-10-17 15:35:53,865 - root - INFO - Total VRAM 15474 MB, total RAM 128731 MB
2024-10-17 15:35:53,865 - root - INFO - pytorch version: 2.3.1+cxx11.abi
2024-10-17 15:35:53,866 - root - INFO - Set vram state to: LOW_VRAM
2024-10-17 15:35:53,866 - root - INFO - Device: xpu
2024-10-17 15:35:54,811 - root - INFO - 
Import times for custom nodes:
2024-10-17 15:35:54,811 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/websocket_image_save.py
2024-10-17 15:35:54,811 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI-Inpaint-CropAndStitch
2024-10-17 15:35:54,811 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI-OpenPose-Editor
2024-10-17 15:35:54,812 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI-WD14-Tagger
2024-10-17 15:35:54,812 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI-Custom-Scripts
2024-10-17 15:35:54,812 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI_essentials
2024-10-17 15:35:54,812 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved
2024-10-17 15:35:54,812 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI-KJNodes
2024-10-17 15:35:54,812 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI_tinyterraNodes
2024-10-17 15:35:54,812 - root - INFO -    0.0 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI-Manager
2024-10-17 15:35:54,812 - root - INFO -    0.1 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI_FizzNodes
2024-10-17 15:35:54,812 - root - INFO -    0.4 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI-Impact-Pack
2024-10-17 15:35:54,812 - root - INFO -    0.4 seconds: /home/shouryo/Software/ComfyUI/custom_nodes/ComfyUI_Custom_Nodes_AlekPet
2024-10-17 15:35:54,812 - root - INFO - 
2024-10-17 15:35:54,818 - root - INFO - Starting server

2024-10-17 15:35:54,818 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2024-10-17 15:40:27,249 - root - INFO - got prompt
2024-10-17 15:40:28,811 - root - INFO - model weight dtype torch.float16, manual cast: None
2024-10-17 15:40:28,812 - root - INFO - model_type EPS
2024-10-17 15:40:34,409 - root - INFO - Using pytorch attention in VAE
2024-10-17 15:40:34,410 - root - INFO - Using pytorch attention in VAE
2024-10-17 15:40:35,846 - root - INFO - Requested to load SDXLClipModel
2024-10-17 15:40:35,846 - root - INFO - Loading 1 new model
2024-10-17 15:40:35,853 - root - INFO - loaded completely 0.0 1560.802734375 True
2024-10-17 15:40:38,966 - root - INFO - Requested to load AutoencoderKL
2024-10-17 15:40:38,967 - root - INFO - Loading 1 new model
2024-10-17 15:40:39,034 - root - INFO - loaded completely 0.0 159.55708122253418 True
2024-10-17 15:40:40,408 - root - ERROR - !!! Exception during processing !!! could not create a primitive
2024-10-17 15:40:40,409 - root - ERROR - Traceback (most recent call last):
  [stack frames identical to the Stack Trace above]
RuntimeError: could not create a primitive

2024-10-17 15:40:40,410 - root - INFO - Prompt executed in 13.16 seconds
2024-10-17 15:40:43,815 - root - INFO - got prompt
2024-10-17 15:40:45,013 - root - INFO - model weight dtype torch.float16, manual cast: None
2024-10-17 15:40:45,013 - root - INFO - model_type EPS
2024-10-17 15:40:48,509 - root - INFO - Using pytorch attention in VAE
2024-10-17 15:40:48,509 - root - INFO - Using pytorch attention in VAE
2024-10-17 15:40:49,006 - root - INFO - Requested to load SD1ClipModel
2024-10-17 15:40:49,006 - root - INFO - Loading 1 new model
2024-10-17 15:40:49,008 - root - INFO - loaded completely 0.0 235.84423828125 True
2024-10-17 15:40:49,657 - root - INFO - Requested to load AutoencoderKL
2024-10-17 15:40:49,657 - root - INFO - Loading 1 new model
2024-10-17 15:40:49,727 - root - INFO - loaded completely 0.0 159.55708122253418 True
2024-10-17 15:40:49,802 - root - ERROR - !!! Exception during processing !!! could not create a primitive
2024-10-17 15:40:49,802 - root - ERROR - Traceback (most recent call last):
  [stack frames identical to the Stack Trace above]
RuntimeError: could not create a primitive

2024-10-17 15:40:49,803 - root - INFO - Prompt executed in 5.99 seconds
2024-10-17 15:40:51,502 - root - INFO - got prompt
2024-10-17 15:40:51,526 - root - ERROR - !!! Exception during processing !!! could not create a primitive
2024-10-17 15:40:51,526 - root - ERROR - Traceback (most recent call last):
  [stack frames identical to the Stack Trace above]
RuntimeError: could not create a primitive

2024-10-17 15:40:51,527 - root - INFO - Prompt executed in 0.02 seconds

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":20,"last_link_id":27,"nodes":[{"id":8,"type":"VAEDecode","pos":{"0":1209,"1":188},"size":{"0":210,"1":46},"flags":{},"order":8,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":7},{"name":"vae","type":"VAE","link":17}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[9],"slot_index":0}],"properties":{"Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":12,"type":"VAEEncode","pos":{"0":614.97998046875,"1":707.6800537109375},"size":{"0":210,"1":46},"flags":{},"order":4,"mode":0,"inputs":[{"name":"pixels","type":"IMAGE","link":27},{"name":"vae","type":"VAE","link":16}],"outputs":[{"name":"LATENT","type":"LATENT","links":[11],"slot_index":0}],"properties":{"Node name for S&R":"VAEEncode"},"widgets_values":[]},{"id":7,"type":"CLIPTextEncode","pos":{"0":413,"1":389},"size":{"0":425.27801513671875,"1":180.6060791015625},"flags":{},"order":6,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":22}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[6],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["watermark, text, logo",true]},{"id":20,"type":"ImageResizeKJ","pos":{"0":161,"1":689},"size":{"0":315,"1":266},"flags":{},"order":2,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":25},{"name":"get_image_size","type":"IMAGE","link":null,"shape":7},{"name":"width_input","type":"INT","link":null,"widget":{"name":"width_input"}},{"name":"height_input","type":"INT","link":null,"widget":{"name":"height_input"}}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[27],"slot_index":0,"shape":3},{"name":"width","type":"INT","links":null,"shape":3},{"name":"height","type":"INT","links":null,"shape":3}],"properties":{"Node name for S&R":"ImageResizeKJ"},"widgets_values":[1000,1560,"bilinear",false,2,0,0,"disabled"]},{"id":19,"type":"LoraLoader","pos":{"0":81,"1":219},"size":{"0":315,"1":126},"flags":{},"order":3,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":23},{"name":"clip","type":"CLIP","link":20}],"outputs":[{"name":"MODEL","type":"MODEL","links":[24],"slot_index":0,"shape":3},{"name":"CLIP","type":"CLIP","links":[21,22],"slot_index":1,"shape":3}],"properties":{"Node name for S&R":"LoraLoader"},"widgets_values":["佐倉おりこSDXL_LoHA.safetensors",1,1]},{"id":3,"type":"KSampler","pos":{"0":853,"1":182},"size":{"0":315,"1":474},"flags":{},"order":7,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":24},{"name":"positive","type":"CONDITIONING","link":4},{"name":"negative","type":"CONDITIONING","link":6},{"name":"latent_image","type":"LATENT","link":11}],"outputs":[{"name":"LATENT","type":"LATENT","links":[7],"slot_index":0}],"properties":{"Node name for S&R":"KSampler"},"widgets_values":[506667112414263,"randomize",20,8,"dpmpp_2m","karras",0.6]},{"id":9,"type":"SaveImage","pos":{"0":524,"1":928},"size":{"0":442.30426025390625,"1":528.2882080078125},"flags":{},"order":9,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":9}],"outputs":[],"properties":{},"widgets_values":["ComfyUI"]},{"id":18,"type":"LoadImage","pos":{"0":-181,"1":912},"size":{"0":512.4854125976562,"1":524.7637329101562},"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[25],"slot_index":0,"shape":3},{"name":"MASK","type":"MASK","links":null,"shape":3}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["image-5-1024x683.png","image"]},{"id":6,"type":"CLIPTextEncode","pos":{"0":415,"1":186},"size":{"0":422.84503173828125,"1":164.31304931640625},"flags":{},"order":5,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":21}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[4],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["white text on red background",true]},{"id":14,"type":"CheckpointLoaderSimple","pos":{"0":-259,"1":327},"size":{"0":315,"1":98},"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[23],"slot_index":0,"shape":3},{"name":"CLIP","type":"CLIP","links":[20],"slot_index":1,"shape":3},{"name":"VAE","type":"VAE","links":[16,17],"slot_index":2,"shape":3}],"properties":{"Node name for S&R":"CheckpointLoaderSimple"},"widgets_values":["sd-v1-5-inpainting.safetensors"]}],"links":[[4,6,0,3,1,"CONDITIONING"],[6,7,0,3,2,"CONDITIONING"],[7,3,0,8,0,"LATENT"],[9,8,0,9,0,"IMAGE"],[11,12,0,3,3,"LATENT"],[16,14,2,12,1,"VAE"],[17,14,2,8,1,"VAE"],[20,14,1,19,1,"CLIP"],[21,19,1,6,0,"CLIP"],[22,19,1,7,0,"CLIP"],[23,14,0,19,0,"MODEL"],[24,19,0,3,0,"MODEL"],[25,18,0,20,0,"IMAGE"],[27,20,0,12,0,"IMAGE"]],"groups":[{"title":"Loading images","bounding":[150,630,726,171],"color":"#3f789e","font_size":24,"flags":{}}],"config":{},"extra":{"ds":{"scale":0.9090909090909091,"offset":[423.38601410120583,-117.7985724350568]}},"version":0.4}

Additional Context

Ubuntu (Linux Mint 22.0) Intel Arc A770 16GB ASRock Challenger

Kashouryo avatar Oct 17 '24 19:10 Kashouryo

I just tested your workflow and it works for me (but I have a completely different setup).

Have you tried upgrading ComfyUI and all your custom nodes (especially KJNodes)? Maybe your KJNodes install is outdated and the VAEEncode error is caused by the Resize Image node. You can try bypassing it and see if the error goes away.

I also see that you used an SDXL LoHA with an SD1.5 model; they won't work together.

LukeG89 avatar Oct 17 '24 20:10 LukeG89

It also happens with a regular KSampler. I will disable all my custom nodes and see how it goes.

Kashouryo avatar Oct 18 '24 01:10 Kashouryo

Yup, after disabling all my custom nodes, it still happens. Note that I am using an Intel Arc A770 with IPEX under Linux Mint 22.

Kashouryo avatar Oct 18 '24 01:10 Kashouryo

I also got this issue on my dual-Arc computer. Further information: it originates from oneDNN error code -6, which means out of memory.

There's a previous discussion here: https://github.com/oneapi-src/oneDNN/issues/914

It happened after an apt upgrade, so I believe it might be an Intel-side issue.

LovelyA72 avatar Oct 18 '24 19:10 LovelyA72

I updated the system with the latest Intel oneAPI software, and it still fails with the same error. Currently I can only generate images on the CPU, which is extremely slow. Any other users with the same issue, PLEASE CHIME IN!!! You are not alone!

Kashouryo avatar Oct 25 '24 16:10 Kashouryo

I believe this is related to your PyTorch version. 2.3 doesn't have direct support for your GPU, and the IPEX workaround doesn't always work well.

You might try downgrading PyTorch to 2.1.2 (torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2) and IPEX to 2.1.100, then test again.

Also check this out: PyTorch 2.5 now has Intel GPU support! This is fascinating news, because you will no longer need IPEX.

https://pytorch.org/blog/intel-gpu-support-pytorch-2-5/

I don't suggest installing the latest version immediately, since it is quite new and many dependent libraries might have issues. But sooner or later it should work well, and it will also be faster (estimated performance increase of 15%-30%).
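Whichever PyTorch route you take, it helps to confirm which backend your install actually exposes before blaming the runtime. The sketch below is illustrative (the function name `pick_device` is made up); `torch.xpu` is only present on PyTorch 2.5+ natively, or on older versions after importing IPEX:

```python
import importlib.util

def pick_device() -> str:
    """Return a best-guess torch device string, falling back to 'cpu'."""
    if importlib.util.find_spec("torch") is None:
        return "cpu"  # torch is not installed at all
    import torch
    # torch.xpu exists natively on PyTorch >= 2.5, or on older versions
    # after `import intel_extension_for_pytorch` has been run.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "xpu"
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"

print(pick_device())
```

If this prints `cpu` on an Arc machine, the Python environment can't see the GPU at all and the problem is below ComfyUI (driver, runtime, or wheel mismatch).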

uyanikfatih avatar Oct 30 '24 00:10 uyanikfatih

Update on my side: I switched to a RX6900XT and everything works fine now

LovelyA72 avatar Nov 01 '24 21:11 LovelyA72

Sorry, I should've seen this earlier, but I have been busy with personal affairs. Looking at the logs, it seems like IPEX does recognize the card and starts up, but then goes off the rails. From reading the thread, it seems like this is only an issue if there are two Intel GPUs on board, right? Because it is working fine on my system, but I only have a single GPU.

> Also check this out Pytorch 2.5 now has Intel GPU support! This is fascinating news, because you will not need IPEX.

You will still need IPEX for speed, because 2.5 is slow without the optimization functionality IPEX has. It remains to be seen when those optimizations will be upstreamed to PyTorch, but hopefully soon. ComfyUI has supported both IPEX and regular PyTorch with XPU since early fall.

Edit: Can one of you run sycl-ls in your ComfyUI Python environment? It should be included with any oneAPI install. You should see output like this:

> sycl-ls
[opencl:gpu][opencl:0] Intel(R) OpenCL Graphics, Intel(R) Arc(TM) A770 Graphics OpenCL 3.0 NEO  [24.26.30049.6]
[opencl:cpu][opencl:1] Intel(R) OpenCL, AMD Ryzen 9 5950X 16-Core Processor             OpenCL 3.0 (Build 0) [2024.18.10.0.08_160000]
[level_zero:gpu][level_zero:0] Intel(R) Level-Zero, Intel(R) Arc(TM) A770 Graphics 1.3 [1.3.30049]

simonlui avatar Nov 03 '24 18:11 simonlui

I only run one A770 and had the same problem.

>sycl-ls
[level_zero:gpu][level_zero:0] Intel(R) oneAPI Unified Runtime over Level-Zero, Intel(R) Arc(TM) A770 Graphics 12.55.8 [1.6.31294+20]
[opencl:cpu][opencl:0] Intel(R) OpenCL, AMD Ryzen 5 5600G with Radeon Graphics          OpenCL 3.0 (Build 0) [2024.18.10.0.08_160000]
[opencl:cpu][opencl:1] Intel(R) OpenCL, AMD Ryzen 5 5600G with Radeon Graphics          OpenCL 3.0 (Build 0) [2024.17.5.0.08_160000.xmain-hotfix]
[opencl:gpu][opencl:2] Intel(R) OpenCL Graphics, Intel(R) Arc(TM) A770 Graphics OpenCL 3.0 NEO  [24.39.31294]

Someone above mentioned this happening after an upgrade, which is also what happened to me. I tested some downgrades, and when I downgraded intel-opencl-icd to 24.22.29735.27-914~24.04, everything worked again.

>sycl-ls
[level_zero:gpu][level_zero:0] Intel(R) oneAPI Unified Runtime over Level-Zero, Intel(R) Arc(TM) A770 Graphics 12.55.8 [1.3.29735+27]
[opencl:cpu][opencl:0] Intel(R) OpenCL, AMD Ryzen 5 5600G with Radeon Graphics          OpenCL 3.0 (Build 0) [2024.18.10.0.08_160000]
[opencl:cpu][opencl:1] Intel(R) OpenCL, AMD Ryzen 5 5600G with Radeon Graphics          OpenCL 3.0 (Build 0) [2024.17.5.0.08_160000.xmain-hotfix]
[opencl:gpu][opencl:2] Intel(R) OpenCL Graphics, Intel(R) Arc(TM) A770 Graphics OpenCL 3.0 NEO  [24.22.29735.27]
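For anyone else hitting this, the downgrade-and-pin step on an Ubuntu 24.04-based distro might look like the sketch below. The version string is the one reported working above; verify the exact version actually available in your repositories before running it:

```shell
# Downgrade the OpenCL compute runtime to the last known-good build,
# then hold it so a routine `apt upgrade` doesn't pull the broken one back in.
sudo apt install intel-opencl-icd=24.22.29735.27-914~24.04
sudo apt-mark hold intel-opencl-icd

# Later, once a fixed runtime ships, release the hold:
# sudo apt-mark unhold intel-opencl-icd
```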

DrScarab avatar Nov 05 '24 21:11 DrScarab

Oh okay, I guess this must be distro-specific. I usually pull the latest package releases from https://github.com/intel/compute-runtime/releases/ to avoid problems with the runtime and to ensure I get the latest.

simonlui avatar Nov 06 '24 06:11 simonlui