[ONNX] Support for nightly versions of the directml package
Self-explanatory title. The latest nightly version has improved my inference speeds considerably, and I know others who use it as well, but it currently fails to import because import_utils.py does not list "ort_nightly_directml" as an ONNX candidate.
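For context, a minimal sketch of the kind of call that breaks when the nightly wheel is the only ONNX Runtime package installed (the model path is a placeholder for a local ONNX-converted model):

from diffusers import OnnxStableDiffusionPipeline

# With only ort_nightly_directml installed, diffusers' availability check
# misses it, so the ONNX code paths fail further downstream.
pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "./my-onnx-model",  # placeholder: any locally converted ONNX model
    provider="DmlExecutionProvider",
)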
I tested the reported issue and was unable to reproduce it: import_utils.py did not give an error that DirectML was unavailable when using provider="DmlExecutionProvider".
- pip install ort_nightly_directml will result in an old version (Mar 20, 2022) of ONNX Runtime being installed, which then causes errors when attempting to generate an image (not related to import_utils.py).
- Installing from the current repo doesn't result in any errors, other than an incorrect version of protobuf being installed (fix it with pip install protobuf==3.20.1).
What exact error are you getting?
I uninstalled all ONNX-related packages I had, then installed the latest ort-nightly-directml package, which I grabbed from here: https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml/overview/1.14.0.dev20221114003
I was also getting an incorrect version of protobuf, but I got that sorted.
The error I then got was: "NameError: name 'ort' is not defined. Did you mean: 'oct'?"
It was immediately fixed when I added "ort_nightly_directml" to the candidates list used for _onnx_available within import_utils.py.
Please share your system info with us. You can run the command diffusers-cli env and copy-paste its output below.
- `diffusers` version: 0.8.0.dev0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.10.6
- PyTorch version (GPU?): 1.12.1+cpu (False)
- Huggingface_hub version: 0.10.1
- Transformers version: 4.22.1
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
cc @anton-l (maybe cc @mfuntowicz as well)
So I removed the "ort_nightly_directml" entry I had added to import_utils.py to check whether I would still get that error, and I did. Here is the full traceback:
Traceback (most recent call last):
  File "C:\SD\diffusers\LuxionUI.py", line 576, in <module>
    updatePipe(default_device)
  File "C:\SD\diffusers\LuxionUI.py", line 490, in updatePipe
    pipe = OnnxStableDiffusionLongPromptWeightingPipeline.from_pretrained("./models/" + model_name, provider="DmlExecutionProvider")
  File "C:\SD\diffusers\virtualenv\lib\site-packages\diffusers\pipeline_utils.py", line 629, in from_pretrained
    loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
  File "C:\SD\diffusers\virtualenv\lib\site-packages\diffusers\onnx_utils.py", line 206, in from_pretrained
    return cls._from_pretrained(
  File "C:\SD\diffusers\virtualenv\lib\site-packages\diffusers\onnx_utils.py", line 173, in _from_pretrained
    model = OnnxRuntimeModel.load_model(
  File "C:\SD\diffusers\virtualenv\lib\site-packages\diffusers\onnx_utils.py", line 78, in load_model
    return ort.InferenceSession(path, providers=[provider], sess_options=sess_options)
NameError: name 'ort' is not defined. Did you mean: 'oct'?
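For context on why this surfaces as a NameError: onnx_utils.py only binds ort behind an availability guard, roughly like the sketch below (simplified, not the exact source):

from diffusers.utils import is_onnx_available

if is_onnx_available():
    import onnxruntime as ort  # skipped entirely when the check fails

def load_model(path, provider=None, sess_options=None):
    # If the guard above was False, `ort` was never bound, so this line
    # raises: NameError: name 'ort' is not defined
    return ort.InferenceSession(path, providers=[provider], sess_options=sess_options)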
Some clarification that might be useful: as mentioned, I did not have onnx, onnxruntime, or any other ONNX packages before I installed this wheel: ort_nightly_directml-1.14.0.dev20221114003-cp310-cp310-win_amd64.whl, grabbed from the link mentioned above. The model I'm trying to load is a working ONNX-converted model that I have locally, and everything works fine once I add "ort_nightly_directml" to import_utils.py.
Gentle ping @anton-l
Hi @GreenLandisaLie! It seems that onnxruntime is still not visible in your environment after installing ort_nightly_directml. Could you confirm that by running the following two snippets?
import pkgutil

for module in pkgutil.iter_modules():
    if "onnx" in module.name:
        print(module.name)
import onnxruntime as ort
print(ort.__version__)
Hello @anton-l! The first one returned:
onnx
onnxruntime
(onnx shows up because I had to pip install onnx to be able to run the conversion script)
The second one returned:
1.14.0
So it seems your snippets detect onnxruntime just fine. I then tried this one, with code copied straight from import_utils.py:
import importlib.util
import operator as op
import os
import sys
from collections import OrderedDict
from typing import Union

from packaging import version
from packaging.version import Version, parse

if sys.version_info < (3, 8):
    import importlib_metadata
else:
    import importlib.metadata as importlib_metadata

_onnxruntime_version = "N/A"
_onnx_available = importlib.util.find_spec("onnxruntime") is not None
if _onnx_available:
    candidates = ("onnxruntime", "onnxruntime-gpu", "onnxruntime-directml", "onnxruntime-openvino")
    _onnxruntime_version = None
    # For the metadata, we have to look for both onnxruntime and onnxruntime-gpu
    for pkg in candidates:
        try:
            _onnxruntime_version = importlib_metadata.version(pkg)
            break
        except importlib_metadata.PackageNotFoundError:
            pass
    _onnx_available = _onnxruntime_version is not None
    if _onnx_available:
        print("_onnx_available")

print(_onnx_available)
print(_onnxruntime_version)
The result:
False
None
But when I added "ort_nightly_directml" to candidates:
_onnx_available
True
1.14.0.dev20221114003
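(The underlying mismatch: importlib.util.find_spec resolves the import name onnxruntime, which the nightly wheel does provide, while importlib_metadata.version looks up the distribution name, which for this wheel is ort_nightly_directml. A minimal illustration, assuming only the nightly wheel is installed:)

import importlib.util
import importlib.metadata as importlib_metadata

# The import name resolves: the nightly wheel still installs an `onnxruntime` module.
print(importlib.util.find_spec("onnxruntime") is not None)  # True

# But no distribution named "onnxruntime" is registered...
try:
    importlib_metadata.version("onnxruntime")
except importlib_metadata.PackageNotFoundError:
    print("no distribution metadata under 'onnxruntime'")

# ...because the wheel registers its metadata under its own name.
print(importlib_metadata.version("ort_nightly_directml"))  # 1.14.0.dev20221114003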
@GreenLandisaLie ah, now I see where it's coming from, thank you for investigating! Didn't expect the nightly package to have a custom name. Adding it to the list!
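For anyone following along, the fix amounts to one extra entry in the candidates tuple shown above (a sketch; the exact tuple may differ between diffusers versions):

candidates = (
    "onnxruntime",
    "onnxruntime-gpu",
    "onnxruntime-directml",
    "onnxruntime-openvino",
    "ort_nightly_directml",  # nightly DirectML wheels register under this distribution name
)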
Thank you for looking into this, and sorry for taking up your time on something that, so far, seems to have affected only me. Feel free to close this issue whenever you want.