AMD-GPU Forge ONNX Error on startup
AMD-GPU Forge webui starts successfully, but reports the following error with ONNX:
ONNX failed to initialize: module 'optimum.onnxruntime.modeling_diffusion' has no attribute 'ORTPipelinePart'
Full Output up to that point:
venv "F:\stable-diffusion-webui-amdgpu-forge\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-1.10.1
Commit hash: ef7f1aff1005f4d166c7af3ec5c05c40fe47feab
ROCm: agents=['gfx1100']
ROCm: version=6.2, using agent gfx1100
ZLUDA support: experimental
ZLUDA load: path='F:\stable-diffusion-webui-amdgpu-forge\.zluda' nightly=False
Launching Web UI with arguments: --zluda --theme dark
Total VRAM 24560 MB, total RAM 65462 MB
pytorch version: 2.7.0+cu118
Set vram state to: NORMAL_VRAM
Device: cuda:0 AMD Radeon RX 7900 XTX [ZLUDA] : native
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: False
W0729 17:08:51.137399 16016 venv\Lib\site-packages\torch\distributed\elastic\multiprocessing\redirects.py:29] NOTE: Redirects are currently not supported in Windows or MacOs.
Using pytorch cross attention
Using pytorch attention for VAE
ONNX failed to initialize: module 'optimum.onnxruntime.modeling_diffusion' has no attribute 'ORTPipelinePart'
Wondering if this is the cause of the horrible performance and memory leaks / out-of-memory issues with your webui? Please let me know, thanks!
Hey, you have to add --skip-ort to the launch args, then delete the venv folder and relaunch webui-user.bat.
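For reference, here's a sketch of what the relevant line in webui-user.bat might look like after adding the flag (the --zluda and --theme dark flags match the launch arguments in the log above; your other args may differ):

```bat
rem webui-user.bat (sketch): append --skip-ort to your existing launch args
set COMMANDLINE_ARGS=--zluda --theme dark --skip-ort
```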
I'm not that user, but what if I wanted to use ONNX? I'm having the same error as him on startup. I've tried a variety of things, including multiple downgrades of optimum and onnxruntime, but as far as I can tell no build of Optimum includes this "ORTPipelinePart" string.
After a TON of searching, I've finally discovered that ORTPipelinePart is a class that existed in optimum from v1.23.0 (https://github.com/huggingface/optimum/pull/2021) through v1.25.3 and was eventually removed (https://github.com/huggingface/optimum/pull/2234). The optimum.onnxruntime code was later moved out to huggingface/optimum-onnx, but there's no longer any ORTPipelinePart to interact with. Removing the line of webui code that references it will at least allow ONNX to initialize with the CPUExecutionProvider. Still have to look into getting it to find the ROCm execution provider, though...
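If you'd rather not delete the line outright, a defensive guard along these lines might work (a sketch, not the webui's actual code; the helper name is mine) — it only reports ORTPipelinePart as usable when the installed optimum build actually defines it:

```python
import importlib


def has_ort_pipeline_part() -> bool:
    """Check whether the installed optimum build still ships ORTPipelinePart
    (present in optimum v1.23.0 through v1.25.x, removed afterwards)."""
    try:
        mod = importlib.import_module("optimum.onnxruntime.modeling_diffusion")
    except ImportError:
        # optimum (or one of its dependencies) isn't installed
        return False
    return hasattr(mod, "ORTPipelinePart")
```

The webui could then skip its ONNX pipeline patching whenever this returns False instead of crashing on the missing attribute.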
Well, now ONNX Runtime in the latest release is saying to stop using ROCm and use MIGraphX or Vitis AI instead... goodie... Seems like everyone else wants to stick with "ROCm", so that'll make for even more headaches. ☹️
I guess AMD is following the Microsoft school of naming and throwing out the name/system everyone is used to and going with something else. 🙄
Looks like ROCm/MIGraphX doesn't detect my RX 9070 XT via onnxruntime/onnxruntime-rocm on Python 3.10, but it found it with a 3.12 install.
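A quick way to see which execution providers your onnxruntime build exposes (a sketch; the helper name is mine, and it returns an empty list when onnxruntime isn't importable at all). A ROCm or MIGraphX provider should show up here if the GPU build is being picked up:

```python
def available_ort_providers() -> list:
    """List the execution providers the installed onnxruntime exposes,
    e.g. something like ['ROCMExecutionProvider', 'CPUExecutionProvider']
    on a working GPU build."""
    try:
        import onnxruntime
    except ImportError:
        # onnxruntime not installed in this environment
        return []
    return onnxruntime.get_available_providers()


print(available_ort_providers())
```

If only CPUExecutionProvider appears, the wheel you installed has no GPU support, regardless of what the driver side reports.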
Workaround for this and the new error: No module named 'optimum.onnxruntime'
Make sure --skip-ort is in the webui-user.bat.
Go inside the stable-diffusion-webui-amdgpu-forge folder.
Then click in the File Explorer address bar (not the search bar), type cmd, and press Enter.
Then copy and paste these commands one by one:
venv\Scripts\activate.bat
pip install optimum[onnxruntime]
Then close the cmd window and relaunch webui-user.bat.
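To confirm the reinstall actually fixed the "No module named 'optimum.onnxruntime'" error, here's a quick check you can run inside the activated venv (a sketch; `module_available` is just a helper name I made up):

```python
import importlib.util


def module_available(name: str) -> bool:
    """Return True if the named top-level module can be found
    without actually importing it."""
    return importlib.util.find_spec(name) is not None


# Both should print True once `pip install optimum[onnxruntime]` succeeds.
print(module_available("optimum"))
print(module_available("onnxruntime"))
```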