ONNX export / TensorRT build errors when running on an RTX 2070 in WSL
python3 convert_unet.py --ckpt_path ../../models/checkpoints/sd_xl_base_1.0_0.9vae.safetensors
Total VRAM 8192 MB, total RAM 48178 MB
xformers version: 0.0.23.post1
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2070 : native
VAE dtype: torch.float32
Using xformers cross attention
model_type EPS
adm 2816
detected baseline model version: SDXL
Exporting sd_xl_base_1.0_0.9vae.safetensors to TensorRT
[I] size & shape parameters:
- batch size: min=1, opt=1, max=1
- height: min=768, opt=1024, max=1024
- width: min=768, opt=1024, max=1024
- token count: min=75, opt=75, max=150
/home/wsluser/ComfyUI/custom_nodes/comfy-trt-test/../../comfy/ldm/modules/diffusionmodules/openaimodel.py:841: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert y.shape[0] == x.shape[0]
The identical TracerWarning then repeats for every other data-dependent check the tracer hits:

- comfy/ldm/modules/diffusionmodules/openaimodel.py:122 — `assert x.shape[1] == self.channels`
- comfy/ldm/modules/attention.py:289 — `if b * heads > 65535:`
- xformers/ops/fmha/common.py:178-180 — `self.query.shape == (B, Mq, K) and self.key.shape == (B, Mkv, K) and self.value.shape == (B, Mkv, Kv)`
- xformers/ops/fmha/common.py:277 — `if not cls.SUPPORTS_DIFFERENT_VALUE_EMBED and K != Kv:`
- xformers/ops/fmha/common.py:279 — `if max(K, Kv) > cls.SUPPORTED_MAX_K:`
- xformers/ops/fmha/common.py:540 — `if x.shape[-1] % alignment != 0:`
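For context, those TracerWarnings all say the same thing: the tracer evaluates Python-level branches on tensor values once, with the example input, and freezes whichever branch was taken into the graph. A minimal sketch of that behavior (using plain `torch.jit.trace` rather than the ONNX exporter, but the mechanism is the same):

```python
import torch

def sign_gate(x):
    # Tensor-to-bool conversion: under tracing this emits a TracerWarning,
    # and the branch taken for the example input is baked in as a constant.
    if x.sum() > 0:
        return x * 2
    return x * -1

# Traced with a positive input, so the `x * 2` branch is frozen into the graph.
traced = torch.jit.trace(sign_gate, torch.ones(3))

# Even a negative input now goes through the frozen positive branch:
# traced(torch.full((3,), -1.0)) multiplies by 2 instead of by -1.
print(traced(torch.full((3,), -1.0)))
```

The warnings themselves are usually survivable for export (the shapes are pinned by the min/opt/max profile anyway); the hard failure comes afterwards, on the xformers op.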
ERROR:root:Exporting to ONNX failed. unsupported output type: int, from operator: xformers::efficient_attention_forward_cutlass
Building TensorRT engine… This can take a while.
Building TensorRT engine for /home/wsluser/ComfyUI/custom_nodes/comfy-trt-test/comfy_trt/Unet-onnx/sd_xl_base_1.0_0.9vae.onnx: /home/wsluser/ComfyUI/custom_nodes/comfy-trt-test/comfy_trt/Unet-trt/sd_xl_base_1.0_0.9vae_387ffbda0547d0a571e2d78607b1ccab.trt
Could not open file /home/wsluser/ComfyUI/custom_nodes/comfy-trt-test/comfy_trt/Unet-onnx/sd_xl_base_1.0_0.9vae.onnx
Could not open file /home/wsluser/ComfyUI/custom_nodes/comfy-trt-test/comfy_trt/Unet-onnx/sd_xl_base_1.0_0.9vae.onnx
[E] ModelImporter.cpp:733: Failed to parse ONNX model from file: /home/wsluser/ComfyUI/custom_nodes/comfy-trt-test/comfy_trt/Unet-onnx/sd_xl_base_1.0_0.9vae.onnx
[!] Failed to parse ONNX model. Does the model file exist and contain a valid ONNX model?
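As far as I can tell, the "Could not open file" / parse failure is just a downstream symptom: the export aborted on the xformers op, so no .onnx file was ever written, and the TensorRT build then has nothing to parse. A quick sanity check I used to confirm that (the path is from my setup; substitute your own):

```python
import os

def check_onnx_artifact(path: str) -> str:
    """Report whether an expected ONNX export artifact exists and is non-empty.

    A failed export typically leaves no file at all, which is why the
    TensorRT parser then reports "Could not open file" rather than a
    model-content error.
    """
    if not os.path.exists(path):
        return "missing: the export never wrote the file"
    if os.path.getsize(path) == 0:
        return "empty: the export started but produced no data"
    return "present: the parser failure would point at the model contents"

# Path from my run (adjust for your install location):
print(check_onnx_artifact(
    "/home/wsluser/ComfyUI/custom_nodes/comfy-trt-test/comfy_trt/Unet-onnx/"
    "sd_xl_base_1.0_0.9vae.onnx"
))
```

In my case this reports the file as missing, which matches the export error above.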
Traceback (most recent call last):
File "/home/wsluser/ComfyUI/custom_nodes/comfy-trt-test/convert_unet.py", line 141, in
I've tried TensorRT v9 and v8, and all the dependencies are installed. I'm not sure WSL is properly supported here. Could you try this on native Windows?