
I always get an error like this in ComfyUI:

CustomOps: Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported. when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
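A quick way to see what the installed onnxruntime build actually exposes is to run a few lines in the same Python environment ComfyUI uses (for the portable build that is usually python_embeded\python.exe). This is only a diagnostic sketch:

import onnxruntime as ort

# Which onnxruntime / onnxruntime-gpu build is actually installed.
print(ort.__version__)

# Providers compiled into this build. The message above typically means
# 'TensorrtExecutionProvider' shows up here, but its TensorRT DLLs cannot
# be loaded, so onnxruntime falls back to CUDA/CPU and retries.
print(ort.get_available_providers())

# 'GPU' if this build can use CUDA at all, otherwise 'CPU'.
print(ort.get_device())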

Open OPACUSTECH opened this issue 1 year ago • 1 comment

Screenshot (139)

OPACUSTECH avatar Feb 18 '24 07:02 OPACUSTECH

Same for me

Yunowhoit avatar Feb 18 '24 16:02 Yunowhoit

The same thing appears for me, and it slows everything down when onnxruntime-1.17.1 is installed. Installing onnxruntime==1.16.2 and onnxruntime-gpu==1.16.2 instead gives:

M:\_SD\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
  warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")

And looking at the task manager, everything runs on the CPU, BUT at the same time everything is faster and there are no errors in the console after each launch. Strange... I would like to try onnxruntime-1.17.1 but with ['CUDAExecutionProvider', 'CPUExecutionProvider'] as the default. Is it possible to do this somehow?
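One possible way to do that is to pass an explicit providers list wherever the ONNX session is created. This is not a setting these custom nodes expose; the sketch below assumes you edit whatever code builds the InferenceSession (for example the DWPose wrapper in comfyui_controlnet_aux), and "model.onnx" is a placeholder path:

import onnxruntime as ort

# Request only CUDA with a CPU fallback, so onnxruntime 1.17.x never tries
# (and fails) to initialize the TensorRT provider.
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]

session = ort.InferenceSession("model.onnx", providers=providers)

# Confirm which providers the session actually ended up with.
print(session.get_providers())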

Ratinod avatar Mar 06 '24 03:03 Ratinod

I got TensorRT from https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-zip and added the DLLs, and it... it makes everything worse... and I get a new warning: "Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32." As a result I get >40 seconds of "rembg" execution (WAS Node Suite).

With ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying: 7.5 seconds.

And with onnxruntime==1.16.2 / onnxruntime-gpu==1.16.2 ("...Onnxruntime not found or doesn't come with acceleration providers...", i.e. CPU mode): 2.51 seconds. Yeah... acceleration should speed things up, not slow them down...
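For what it's worth, this kind of comparison can be reproduced outside ComfyUI with a rough timing sketch. The model path and the 320x320 input shape are assumptions based on the u2net.onnx that rembg caches under ~/.u2net/; adjust them for the model you actually use:

import time
import numpy as np
import onnxruntime as ort

MODEL = "u2net.onnx"  # placeholder path to the rembg model

def time_providers(providers, runs=5):
    sess = ort.InferenceSession(MODEL, providers=providers)
    name = sess.get_inputs()[0].name
    x = np.random.rand(1, 3, 320, 320).astype(np.float32)  # u2net-style input
    sess.run(None, {name: x})  # warm-up run; the first call includes provider setup
    start = time.perf_counter()
    for _ in range(runs):
        sess.run(None, {name: x})
    return (time.perf_counter() - start) / runs

print("CUDA:", time_providers(["CUDAExecutionProvider", "CPUExecutionProvider"]))
print("CPU: ", time_providers(["CPUExecutionProvider"]))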

Ratinod avatar Mar 06 '24 12:03 Ratinod

I'm having the same problem. As @Ratinod said, I got a 1.75x speed increase by downgrading to 1.16.2. I'm still getting a very similar error about TensorRT, and with this downgrade I am unable to use the wd-vit-tagger-v3 model with the WD14 Tagger.

pip uninstall rembg
pip uninstall onnxruntime
pip install rembg[gpu] onnxruntime-gpu
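After reinstalling like that, a quick sanity check that the GPU build is the one being picked up (a sketch; the input path is a placeholder, and the first remove() call will download the default u2net model if it isn't cached yet):

from PIL import Image
from rembg import remove
import onnxruntime as ort

# With only onnxruntime-gpu left installed, 'CUDAExecutionProvider'
# should appear in this list.
print(ort.get_available_providers())

img = Image.open("input.png")  # placeholder test image
out = remove(img)              # should now run on the GPU instead of falling back to CPU
out.save("output.png")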

bobcate avatar Jun 02 '24 00:06 bobcate