
patch_cast_to.<locals>.cast_to() got an unexpected keyword argument 'stream'

Open · zdxpan opened this issue 6 months ago · 1 comment

Expected Behavior

The CLIPVisionLoader node

clipvisionloader_329 = clipvisionloader.load_clip(
    clip_name="sigclip_vision_patch14_384.safetensors"
)

should load the vision model, and the subsequent CLIPVisionEncode step should encode the image without error.

Actual Behavior

Running the workflow with these inputs:

15739116,
http://statica.xuantu.pro/x/prod/draft/2782455_95995d47d3ff47832e1cbbb58196ccc6.jpg,
http://statica.xuantu.pro/x/prod/mask/2782455_240aa11db308e0031b9975b94ada3d9b.png,
Japanese soft girl Qi Liu Hai Princess round face Shao Luo same style long curly hair,
http://statica.xuantu.pro/x/prod/draft/2782455_3e0e3a9cc6d9168a6fb0f41bfcb83e89.JPG

the CLIPVisionEncode step fails with:

TypeError: patch_cast_to.<locals>.cast_to() got an unexpected keyword argument 'stream'

(full traceback under Debug Logs)

Steps to Reproduce

Run the replace_clothes_with_reference.py workflow with the inputs listed under Debug Logs (source image, mask, prompt, reference image). The workflow loads the CLIP vision model via

clipvisionloader_329 = clipvisionloader.load_clip(
    clip_name="sigclip_vision_patch14_384.safetensors"
)

and raises the TypeError at the CLIPVisionEncode step; the full traceback is under Debug Logs.

Debug Logs

15739116,
http://statica.xuantu.pro/x/prod/draft/2782455_95995d47d3ff47832e1cbbb58196ccc6.jpg,
http://statica.xuantu.pro/x/prod/mask/2782455_240aa11db308e0031b9975b94ada3d9b.png,
Japanese soft girl Qi Liu Hai Princess round face Shao Luo same style long curly hair,
http://statica.xuantu.pro/x/prod/draft/2782455_3e0e3a9cc6d9168a6fb0f41bfcb83e89.JPG


CLIPVisionLoader node:

clipvisionloader_329 = clipvisionloader.load_clip(
    clip_name="sigclip_vision_patch14_384.safetensors"
)

  File "/data/comfyui/workflows/replace_clothes_with_reference.py", line 372, in forward
    clipvisionencode_172 = self.clipvisionencode.encode(
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/comfyui/nodes.py", line 1009, in encode
    output = clip_vision.encode_image(image, crop=crop_image)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/comfyui/comfy/clip_vision.py", line 71, in encode_image
    out = self.model(pixel_values=pixel_values, intermediate_output=-2)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/llm/anaconda3/envs/webui2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/llm/anaconda3/envs/webui2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/comfyui/comfy/clip_model.py", line 238, in forward
    x = self.vision_model(*args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/llm/anaconda3/envs/webui2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/llm/anaconda3/envs/webui2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/comfyui/comfy/clip_model.py", line 203, in forward
    x = self.embeddings(pixel_values)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/llm/anaconda3/envs/webui2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/llm/anaconda3/envs/webui2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/comfyui/comfy/clip_model.py", line 176, in forward
    embeds = self.patch_embedding(pixel_values).flatten(2).transpose(1, 2)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/llm/anaconda3/envs/webui2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/llm/anaconda3/envs/webui2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/comfyui/comfy/ops.py", line 112, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/comfyui/comfy/ops.py", line 107, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/comfyui/comfy/ops.py", line 51, in cast_bias_weight
    bias = comfy.model_management.cast_to(s.bias, bias_dtype, device, non_blocking=non_blocking, copy=has_function, stream=offload_stream)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: patch_cast_to.<locals>.cast_to() got an unexpected keyword argument 'stream'

Other

No response

zdxpan · May 21 '25 10:05

In my case (I saw the same error) it was caused by an incompatibility with Comfy-WaveSpeed. Did you try running without custom nodes?

See https://github.com/chengzeyi/Comfy-WaveSpeed/blob/main/init.py#L15

You should probably report it there; it doesn't look like a bug in ComfyUI itself.

For some reason I can't reproduce it locally anymore: WaveSpeed does its monkey-patching, but the patch doesn't stick (something else overwrites it?).
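The traceback fits a classic monkey-patching hazard: a patch replaces `comfy.model_management.cast_to` with a wrapper that hard-codes the parameter list that existed when the patch was written, so when ComfyUI core later started passing a new `stream=` keyword, the wrapper rejects it. Here is a minimal, hypothetical sketch of that failure mode (none of these functions are the real ComfyUI or WaveSpeed code; the names only mirror the traceback):

```python
# Hypothetical sketch (not the actual ComfyUI/WaveSpeed code): a monkey-patch
# wrapper that pins an old parameter list breaks as soon as the patched
# function grows a new keyword argument upstream.

def cast_to(weight, dtype=None, device=None, non_blocking=False,
            copy=False, stream=None):
    """Stand-in for comfy.model_management.cast_to after the 'stream'
    keyword was added upstream."""
    return (weight, dtype, device, non_blocking, copy, stream)

def patch_cast_to(orig):
    # Brittle: hard-codes the signature that existed when the patch was
    # written, so a caller passing stream=... hits a TypeError.
    def cast_to(weight, dtype=None, device=None, non_blocking=False,
                copy=False):
        return orig(weight, dtype, device,
                    non_blocking=non_blocking, copy=copy)
    return cast_to

def patch_cast_to_robust(orig):
    # Robust: forwards *args/**kwargs unchanged, so keywords added to the
    # patched function later pass straight through.
    def cast_to(*args, **kwargs):
        return orig(*args, **kwargs)
    return cast_to

brittle = patch_cast_to(cast_to)
try:
    brittle("bias", stream="offload_stream")  # mimics the newer core caller
except TypeError as e:
    print(e)  # same shape as the reported "unexpected keyword argument 'stream'"

robust = patch_cast_to_robust(cast_to)
print(robust("bias", stream="offload_stream")[-1])  # stream is forwarded intact
```

The brittle wrapper reproduces the reported error exactly; forwarding `*args, **kwargs` keeps a monkey-patch compatible with signature changes in the function it wraps, which is why it is the safer pattern for patching another project's internals.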

Acly · May 25 '25 12:05