ComfyUI-PhotoMaker-ZHO
cutlassF: no kernel found to launch!
Error occurred when executing PhotoMaker_Generation:
cutlassF: no kernel found to launch!
File "F:\ComfyUI\ComfyUI\execution.py", line 155, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI\ComfyUI\execution.py", line 85, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI\ComfyUI\execution.py", line 78, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-PhotoMaker\PhotoMakerNode.py", line 247, in generate_image output = pipe( ^^^^^ File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-PhotoMaker\pipeline.py", line 442, in call noise_pred = self.unet( ^^^^^^^^^^ File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI\python_embeded\Lib\site-packages\diffusers\models\unet_2d_condition.py", line 1112, in forward sample, res_samples = downsample_block( ^^^^^^^^^^^^^^^^^ File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI\python_embeded\Lib\site-packages\diffusers\models\unet_2d_blocks.py", line 1160, in forward hidden_states = attn( ^^^^^ File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI\python_embeded\Lib\site-packages\diffusers\models\transformer_2d.py", line 392, in forward hidden_states = block( ^^^^^^ File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI\python_embeded\Lib\site-packages\diffusers\models\attention.py", line 329, in forward attn_output = self.attn1( ^^^^^^^^^^^ File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI\python_embeded\Lib\site-packages\diffusers\models\attention_processor.py", line 527, in forward return self.processor( ^^^^^^^^^^^^^^^ File 
"F:\ComfyUI\python_embeded\Lib\site-packages\diffusers\models\attention_processor.py", line 1259, in call hidden_states = F.scaled_dot_product_attention( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You need to open PhotoMakerNode.py, search for the bfloat16 keyword, and change it to float16; after that the node runs properly. Hopefully this is a useful reference for some of you. My graphics card is a 1070 Ti, and I know some people with 2080 Tis have this problem too.
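For context, the change boils down to a dtype fallback like the sketch below. This is not the actual PhotoMakerNode.py code, just an illustration of the idea: pre-Ampere GPUs (GTX 10xx, RTX 20xx) have no native bfloat16 attention kernels, which is what triggers the cutlassF error, so the pipeline has to be loaded in float16 instead.

```python
import torch

# Minimal sketch, not the node's actual code: fall back to float16 on GPUs
# without bfloat16 support (e.g. GTX 1070 Ti, RTX 2080 Ti).
if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
    torch_dtype = torch.bfloat16
else:
    torch_dtype = torch.float16

# The pipeline would then be created with this dtype, e.g.
# pipe = PhotoMakerStableDiffusionXLPipeline.from_pretrained(..., torch_dtype=torch_dtype)
```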
I changed bfloat16 to float16. Generation ran for 950 seconds (15 steps) and then failed with this:

Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated: 7.79 GiB
Requested: 128.00 MiB
Device limit: 4.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB
Your video card's memory is over the limit. Updating to the latest graphics card driver allows VRAM to spill over into system memory, but in that case generation is much slower; a single image may take half an hour!
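If the driver-managed spillover is too slow, another thing worth trying on a 4 GB card is diffusers' own offloading. This is only a sketch under the assumption that you can reach the pipeline object the node builds (the `pipe` seen in the traceback); the helper name below is hypothetical, and `enable_model_cpu_offload()` requires the `accelerate` package.

```python
from diffusers import StableDiffusionXLPipeline

def reduce_vram_pressure(pipe: StableDiffusionXLPipeline) -> StableDiffusionXLPipeline:
    """Hypothetical helper: trade speed for a smaller VRAM peak on small cards."""
    # Keep only the currently active sub-model on the GPU (moderate slowdown).
    pipe.enable_model_cpu_offload()
    # Decode latents in slices so the VAE does not spike memory at the end.
    pipe.enable_vae_slicing()
    # For very low VRAM the more aggressive (and much slower) option would be:
    # pipe.enable_sequential_cpu_offload()
    return pipe
```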
I have the latest driver, and I don't think the driver is the problem. When I plugged the wrong CLIP Vision model into IPAdapter, I also got a memory error, so something seems wrong with the model or the VAE. In any case, I won't wait half an hour for a picture; generating a regular SDXL image takes me about 5 minutes. So I won't be able to use PhotoMaker.