
Some Mac (Apple Silicon) issues

Open GillesVermeulen opened this issue 1 year ago • 7 comments

First of all, thanks for the swift work.

Couple of things I noticed on Mac:

  • Installing from "requirements.txt" fails with: "Could not find a version that satisfies the requirement onnxruntime-gpu (from versions: none)". There is no onnxruntime-gpu build for Mac, since it depends on CUDA.
  • Using plain onnxruntime instead, I get the following error: "RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'" (full traceback below)

File "/Applications/ComfyUI/execution.py", line 155, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Applications/ComfyUI/execution.py", line 85, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Applications/ComfyUI/execution.py", line 78, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Applications/ComfyUI/custom_nodes/ComfyUI-InstantID/InstantIDNode.py", line 264, in generate_image output = pipe( ^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/Applications/ComfyUI/custom_nodes/ComfyUI-InstantID/pipeline_stable_diffusion_xl_instantid.py", line 489, in __call__ ) = self.encode_prompt( ^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl.py", line 381, in encode_prompt prompt_embeds = text_encoder(text_input_ids.to(device), output_hidden_states=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py", line 816, in forward return self.text_model( ^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py", line 725, in forward encoder_outputs = self.encoder( ^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py", line 654, in forward layer_outputs = encoder_layer( ^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py", line 382, in forward hidden_states = self.layer_norm1(hidden_states) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/nn/modules/normalization.py", line 196, in forward return F.layer_norm( ^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/nn/functional.py", line 2543, in layer_norm return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

GillesVermeulen, Jan 23 '24

For Mac, I used onnxruntime-silicon instead of onnxruntime-gpu.

Also replaced `device = "cuda" if torch.cuda.is_available() else "cpu"` with `device = "cuda" if torch.cuda.is_available() else "mps"`.

And now the generation process begins, but the resulting image is solid black.
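The black output is consistent with float16 numerics rather than the device itself: in half precision, anything past ~65504 overflows to inf, inf arithmetic produces NaN, and NaN latents commonly decode to a solid black frame. A tiny illustration (pure PyTorch, no ComfyUI):

    import torch

    # Half precision overflows easily; NaN latents decode to black images.
    x = torch.tensor([60000.0], dtype=torch.float16)
    print(x * 2)          # inf: 120000 exceeds float16's ~65504 maximum
    print(x * 2 - x * 2)  # nan: inf - inf is undefined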

garg, Jan 23 '24

I'm getting the same errors on mac m2.

plaidam, Jan 23 '24

I got it working on my Mac M1.

First I replaced onnxruntime-gpu with onnxruntime-silicon in requirements.txt and then ran a pip install.
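(Concretely, that means editing the onnxruntime-gpu line in requirements.txt to read onnxruntime-silicon and reinstalling, assuming you run pip from ComfyUI's own Python environment:)

    pip install -r requirements.txt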

Then I made the following change:

-device = "cuda" if torch.cuda.is_available() else "cpu"
+
+torch_dtype = torch.float16
+
+if torch.backends.mps.is_available():
+  device = "mps"
+  torch_dtype = torch.float32
+elif torch.cuda.is_available():
+  device = "cuda"
+else:
+  device = "cpu"
+

     def INPUT_TYPES(s):
         return {
             "required": {
-                "provider": (["CUDA", "CPU"], ),
+                "provider": (["CoreML", "CUDA", "CPU"], ),

     def load_idcontrolnet(self, controlnet_path):
 
-        controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
+        controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch_dtype)

-        self.to('cuda', dtype)
+        self.to('mps', dtype)

It's hacky at the moment, but it seems like I just needed to change the dtype to float32 and the device to mps.
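For the "CoreML" provider option to do anything, it presumably has to map to the name onnxruntime actually exposes, CoreMLExecutionProvider (assumption: the node builds the provider string from the dropdown choice). A quick way to check what the onnxruntime-silicon build offers:

    import onnxruntime as ort

    # onnxruntime-silicon typically reports:
    # ['CoreMLExecutionProvider', 'CPUExecutionProvider']
    print(ort.get_available_providers())

    # Hypothetical: open any ONNX model with CoreML first, CPU as fallback.
    session = ort.InferenceSession(
        "antelopev2/glintr100.onnx",  # placeholder path to a face model
        providers=["CoreMLExecutionProvider", "CPUExecutionProvider"],
    )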

garg, Jan 23 '24

@garg I tried your solution, but I got the errors below:

Error occurred when executing IDGenerationNode:

No operator found for `memory_efficient_attention_forward` with inputs:
    query     : shape=(20, 4800, 1, 64) (torch.float32)
    key       : shape=(20, 77, 1, 64) (torch.float32)
    value     : shape=(20, 77, 1, 64) (torch.float32)
    attn_bias : <class 'NoneType'>
    p         : 0.0
`decoderF` is not supported because:
    device=mps (supported: {'cuda'})
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see `python -m xformers.info` for more info
`flshattF@…` is not supported because:
    device=mps (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
    operator wasn't built - see `python -m xformers.info` for more info
`tritonflashattF` is not supported because:
    device=mps (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
    operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
`cutlassF` is not supported because:
    device=mps (supported: {'cuda'})
    operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    device=mps (supported: {'cuda'})
    operator wasn't built - see `python -m xformers.info` for more info
    unsupported embed per head: 64

What's my problem?
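Reading the message, every backend xformers probes (`decoderF`, `flshattF`, `tritonflashattF`, `cutlassF`, `smallkF`) supports only device=cuda, so on MPS the memory-efficient-attention path has to be disabled entirely rather than fixed. A hedged sketch of the idea on a plain diffusers pipeline (I haven't checked where this node enables xformers):

    import torch
    from diffusers import StableDiffusionXLPipeline

    # Placeholder checkpoint; the point is the attention configuration.
    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float32,  # float32 on MPS, as discussed above
    )
    pipe.disable_xformers_memory_efficient_attention()  # xformers is CUDA-only
    pipe.enable_attention_slicing()  # memory-friendly attention that runs on MPS
    pipe.to("mps")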

109km, Feb 03 '24

> For Mac, I used onnxruntime-silicon instead of onnxruntime-gpu.
>
> Also replaced `device = "cuda" if torch.cuda.is_available() else "cpu"` with `device = "cuda" if torch.cuda.is_available() else "mps"`.
>
> And now the generation process begins, but the resulting image is solid black.

MacBook Pro M1, this approach solved it for me.

jintianzhouyine, Feb 05 '24

> For Mac, I used onnxruntime-silicon instead of onnxruntime-gpu. Also replaced `device = "cuda" if torch.cuda.is_available() else "cpu"` with `device = "cuda" if torch.cuda.is_available() else "mps"`. And now the generation process begins, but the resulting image is solid black.

> MacBook Pro M1, this approach solved it for me.

Sorry for the dumb question, but where exactly should I replace `device = "cuda" if torch.cuda.is_available() else "cpu"` with `device = "cuda" if torch.cuda.is_available() else "mps"`?

Please help me get this working on my Mac M1 :(

riccardobiagi, Feb 06 '24

> I got it working on my Mac M1. First I replaced onnxruntime-gpu with onnxruntime-silicon in requirements.txt and then ran a pip install. […] It's hacky at the moment, but it seems like I just needed to change the dtype to float32 and the device to mps.

Tried this solution, but I still get a black image. Strange.
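One more thing that may be worth trying for the persistent black image (an assumption on my part, not something verified in this thread): launch ComfyUI with full precision forced and the MPS fallback enabled, so half-precision and missing-kernel code paths are avoided entirely:

    # PYTORCH_ENABLE_MPS_FALLBACK sends ops without MPS kernels to the CPU;
    # --force-fp32 is ComfyUI's flag to disable half precision everywhere.
    PYTORCH_ENABLE_MPS_FALLBACK=1 python main.py --force-fp32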

zealot09, Mar 11 '24