Total VRAM 12288 MB, total RAM 65304 MB
pytorch version: 2.2.0+cu121
xformers version: 0.0.24
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
Using xformers cross attention
ASTERR config loaded successfully
Warn!: xFormers is available (Attention)
Warn!: Traceback (most recent call last):
File "D:\ComfyUI_Build\ComfyUI\nodes.py", line 1906, in load_custom_node
module_spec.loader.exec_module(module)
File "", line 940, in exec_module
File "", line 241, in call_with_frames_removed
File "D:\ComfyUI_Build\ComfyUI\custom_nodes\ComfyUI-3D-Pack_init.py", line 47, in
module = importlib.import_module(f".{nodes_filename}", package=name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "importlib_init_.py", line 126, in import_module
File "", line 1204, in _gcd_import
File "", line 1176, in _find_and_load
File "", line 1147, in _find_and_load_unlocked
File "", line 690, in _load_unlocked
File "", line 940, in exec_module
File "", line 241, in _call_with_frames_removed
File "D:\ComfyUI_Build\ComfyUI\custom_nodes\ComfyUI-3D-Pack\nodes.py", line 76, in
from Unique3D.scripts.mesh_init import fast_geo
File "D:\ComfyUI_Build\ComfyUI\custom_nodes\ComfyUI-3D-Pack\Gen_3D_Modules\Unique3D\scripts\mesh_init.py", line 6, in
from .utils import meshlab_mesh_to_py3dmesh, py3dmesh_to_meshlab_mesh
File "D:\ComfyUI_Build\ComfyUI\custom_nodes\ComfyUI-3D-Pack\Gen_3D_Modules\Unique3D\scripts\utils.py", line 25, in
session = new_session(providers=providers)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_Build\python_embeded\Lib\site-packages\rembg\session_factory.py", line 44, in new_session
return session_class(model_name, sess_opts, providers, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_Build\python_embeded\Lib\site-packages\rembg\sessions\base.py", line 34, in init
self.inner_session = ort.InferenceSession(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Roaming\Python\Python311\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 396, in init
raise e
File "C:\Users\Administrator\AppData\Roaming\Python\Python311\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 383, in init
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "C:\Users\Administrator\AppData\Roaming\Python\Python311\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 415, in _create_inference_session
raise ValueError(
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
Warn!: Cannot import D:\ComfyUI_Build\ComfyUI\custom_nodes\ComfyUI-3D-Pack module for custom nodes: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
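The final error is the actionable one: since ONNX Runtime 1.9, an InferenceSession no longer picks execution providers on its own, so the rembg session created in Unique3D/scripts/utils.py must be given an explicit providers list. A minimal sketch of the pattern the error message asks for, assuming a CUDA-first list with CPU fallback (the model name "u2net" is a placeholder, not a value taken from the log):

```python
# Minimal sketch: pass execution providers explicitly, as ORT >= 1.9 requires.
# "u2net" is a placeholder model name, not a value taken from the log above.
import onnxruntime as ort
from rembg import new_session

# Prefer CUDA when this ORT build exposes it, otherwise fall back to CPU.
providers = [
    p for p in ("CUDAExecutionProvider", "CPUExecutionProvider")
    if p in ort.get_available_providers()
] or ["CPUExecutionProvider"]

# Raw ORT form quoted in the error message:
#   ort.InferenceSession(model_path, providers=providers)

# rembg form matching the failing call in Unique3D/scripts/utils.py:
session = new_session("u2net", providers=providers)
```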