ONNX: Failed to convert model / ONNX opset version 14 is not supported
🐛 Describe the bug
Hello, for a while I have been trying to get Stable Diffusion running on my RX 7900 XTX. It finally works when generating with a regular, unoptimized model. Now I wanted to try ONNX to optimize the models for my GPU. For about 4 minutes my 32 GB of RAM is full and my CPU (R7 7800X3D) utilization sits between 60% and 100%. From what I have read, that is normal. But then I get the following error message:
```
ONNX: Failed to convert model: model='sd_xl_base_1.0.safetensors', error=Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 14 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.
ONNX: processing=StableDiffusionProcessingTxt2Img, pipeline=OnnxRawPipeline
*** Error completing request
*** Arguments: ('task(2vhzwlojaeegnc8)', 'flower', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 1024, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001C4F60E7A00>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
Traceback (most recent call last):
  File "C:\Users\Niklas\pinokio\api\automatic1111.git\app\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\Niklas\pinokio\api\automatic1111.git\app\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "C:\Users\Niklas\pinokio\api\automatic1111.git\app\modules\txt2img.py", line 55, in txt2img
    processed = processing.process_images(p)
  File "C:\Users\Niklas\pinokio\api\automatic1111.git\app\modules\processing.py", line 736, in process_images
    res = process_images_inner(p)
  File "C:\Users\Niklas\pinokio\api\automatic1111.git\app\modules\processing.py", line 841, in process_images_inner
    result = shared.sd_model(**kwargs)
TypeError: 'OnnxRawPipeline' object is not callable
```
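For context (this is my own illustration, not part of the log): the operator the exporter fails on, `aten::scaled_dot_product_attention`, is just the standard attention math. A minimal NumPy sketch of the equivalent decomposition (MatMul → scale → Softmax → MatMul, all operations that do exist in opset 14):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Reference (decomposed) scaled dot-product attention.

    Computes the same math as the fused aten::scaled_dot_product_attention
    operator; an exporter that cannot emit the fused op can in principle
    fall back to this MatMul -> scale -> Softmax -> MatMul decomposition.
    """
    d_k = q.shape[-1]
    # attention scores, scaled by sqrt of the head dimension
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d_k)
    # numerically stable softmax over the last axis
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # weighted sum of the values
    return weights @ v

q = np.random.rand(2, 4, 8, 16)  # (batch, heads, seq, head_dim)
k = np.random.rand(2, 4, 8, 16)
v = np.random.rand(2, 4, 8, 16)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (2, 4, 8, 16)
```

So the failure seems to be purely about the exporter not knowing how to translate the fused op, not about the math itself being inexpressible in opset 14.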
I already tried downgrading torch to 2.0.0, but then I get a compatibility error for torch-directml. Could someone please help me out?
Thanks
Versions
```
Collecting environment information...
PyTorch version: 2.0.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Microsoft Windows 11 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A

Python version: 3.10.13 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:24:38) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture=9
CurrentClockSpeed=4201
DeviceID=CPU0
Family=107
L2CacheSize=8192
L2CacheSpeed=
Manufacturer=AuthenticAMD
MaxClockSpeed=4201
Name=AMD Ryzen 7 7800X3D 8-Core Processor
ProcessorType=3
Revision=24834

Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] onnx==1.15.0
[pip3] onnxruntime==1.17.0
[pip3] onnxruntime-directml==1.17.0
[pip3] open-clip-torch==2.20.0
[pip3] pytorch-lightning==1.9.4
[pip3] torch==2.0.0
[pip3] torch-directml==0.2.0.dev230426
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==0.10.3
[pip3] torchsde==0.2.6
[pip3] torchvision==0.15.1
[conda] Could not collect
```