
`RuntimeError: Non-uniform work-groups are not supported by the target device` on B580.

Open bedovyy opened this issue 5 months ago • 0 comments

Expected Behavior

CLIPVisionEncode node passes CLIP_VISION_OUTPUT.

Actual Behavior

`RuntimeError: Non-uniform work-groups are not supported by the target device` occurs.

Steps to Reproduce

(workflow screenshot attached to the original issue)

Debug Logs

got prompt
!!! Exception during processing !!! Non-uniform work-groups are not supported by the target device
Traceback (most recent call last):
  File "/ai/Projects/ComfyUI/execution.py", line 349, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/ai/Projects/ComfyUI/execution.py", line 224, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/ai/Projects/ComfyUI/execution.py", line 196, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/ai/Projects/ComfyUI/execution.py", line 185, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/ai/Projects/ComfyUI/nodes.py", line 1009, in encode
    output = clip_vision.encode_image(image, crop=crop_image)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/ai/Projects/ComfyUI/comfy/clip_vision.py", line 74, in encode_image
    pixel_values = clip_preprocess(image.to(self.load_device), size=self.image_size, mean=self.image_mean, std=self.image_std, crop=crop).float()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/ai/Projects/ComfyUI/comfy/clip_vision.py", line 36, in clip_preprocess
    image = torch.nn.functional.interpolate(image, size=scale_size, mode="bicubic", antialias=True)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/bedovyy/miniforge3/envs/torch280/lib/python3.12/site-packages/torch/nn/functional.py", line 4799, in interpolate
    return torch._C._nn._upsample_bicubic2d_aa(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Non-uniform work-groups are not supported by the target device

Prompt executed in 0.04 seconds

Other

I tested a few workarounds to get past the CLIP vision encode:

  • Use the CPU for the vision CLIP.
    • `--lowvram` fixes the issue, but the vision CLIP currently runs on the same device as the text CLIP, so the option also makes the text CLIP too slow.
  • Change to `antialias=False` in `torch.nn.functional.interpolate`.
    • This also works, but the output will differ.
  • Change `scale_size` from 224 to 256 in `torch.nn.functional.interpolate`.
    • This passes the CLIP vision encode, but it does not seem like a good idea.
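The first workaround (falling back to the CPU) can be sketched as a small retry helper. This is a hypothetical, torch-free sketch: `encode_fn` stands in for the failing preprocess/encode call, and the tensor only needs `.to()` and `.device`, which torch tensors provide.

```python
def encode_with_cpu_fallback(encode_fn, image):
    """Run encode_fn on the tensor's current device; on a missing-kernel
    RuntimeError, redo the work on CPU and move the result back."""
    try:
        return encode_fn(image)
    except RuntimeError:
        # Backend lacks the kernel (e.g. "Non-uniform work-groups are not
        # supported by the target device" on some XPU devices):
        # retry on CPU, then restore the original device.
        original_device = image.device
        return encode_fn(image.to("cpu")).to(original_device)
```

This mirrors what `--lowvram` ends up doing, but scoped to the vision CLIP only, so the text CLIP keeps its fast device.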

I suggest one of the following:

  • Add a device option to the CLIPVisionLoader node so the user can select the device.
  • Set `antialias=False` when the encode fails, as in the code below.

clip_vision.py:36

        try:
            image = torch.nn.functional.interpolate(image, size=scale_size, mode="bicubic", antialias=True)
        except RuntimeError:
            # e.g. "Non-uniform work-groups are not supported by the target device"
            image = torch.nn.functional.interpolate(image, size=scale_size, mode="bicubic", antialias=False)

bedovyy avatar May 22 '25 16:05 bedovyy