Why should I be forced to have a CUDA or ROCm machine when I want to run OpenVINO on Intel?
This page tells me the inference package supports OpenVINO: https://github.com/pytorch/ort#-inference says "ONNX Runtime for PyTorch supports PyTorch model inference using ONNX Runtime and Intel® OpenVINO™. It is available via the torch-ort-infer python package. This package enables OpenVINO™ Execution Provider for ONNX Runtime by default for accelerating inference on various Intel® CPUs, Intel® integrated GPUs, and Intel® Movidius™ Vision Processing Units - referred to as VPU."
However, when I try to use it, the dependencies pull in torch_ort, which requires CUDA as a prerequisite. I have neither an AMD/ATI nor an NVIDIA card in this Intel PC, and I want to use the Intel GPU. What can I do to drop the CUDA dependencies completely?
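One workaround, if you only need inference: skip torch-ort entirely and call the OpenVINO Execution Provider through the onnxruntime-openvino wheel, which has no CUDA requirement. A minimal sketch, assuming onnxruntime-openvino is installed and you have an exported ONNX model; the model path and input shape below are placeholders:

    # pip install onnxruntime-openvino   (no CUDA needed)
    import numpy as np
    import onnxruntime as ort

    # Ask ORT for the OpenVINO EP, falling back to plain CPU if it is unavailable.
    sess = ort.InferenceSession(
        "model.onnx",  # placeholder: path to your exported ONNX model
        providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
    )

    inp = sess.get_inputs()[0]
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed input shape
    out = sess.run(None, {inp.name: x})
    print(out[0].shape)

You can check which provider actually got picked with sess.get_providers(); if the OpenVINO EP failed to load, it silently falls back to the CPU provider.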
Yeah, I'm also trying to install ORT now, and it's giving me CUDA errors, even though OpenVINO is specifically for Intel hardware. Very weird process. It's also odd that the docs say to install CUDA first, even though PyTorch ships its own CUDA runtime, so you'd think the first prerequisite would be to install PyTorch.
I tried OpenVINO using their ONNX Runtime EP container (https://hub.docker.com/r/openvino/onnxruntime_ep_ubuntu20) on an Intel CPU + Intel GPU:
python3 -m pip install torch-ort-infer
python3 -m torch_ort.configure
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 185, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/usr/lib/python3.8/runpy.py", line 111, in _get_module_details
    __import__(pkg_name)
  File "/home/onnxruntimedev/.local/lib/python3.8/site-packages/torch_ort/__init__.py", line 6, in <module>
    from onnxruntime.training.ortmodule import DebugOptions, LogLevel
ModuleNotFoundError: No module named 'onnxruntime.training'
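The traceback itself points at the cause: torch_ort/__init__.py imports onnxruntime.training.ortmodule, which only ships in the onnxruntime-training wheel, so the stock onnxruntime inside the container cannot satisfy it. The torch_ort.configure step appears to belong to the training setup; the inference example in the pytorch/ort README instead wraps the model in ORTInferenceModule. A sketch of that intended usage, assuming torch-ort-infer imports cleanly (which is exactly what fails above); the resnet18 model and backend/precision values are illustrative, taken from the README's example:

    import torch
    import torchvision
    from torch_ort import ORTInferenceModule, OpenVINOProviderOptions

    model = torchvision.models.resnet18(pretrained=True).eval()

    # Target the Intel integrated GPU via the OpenVINO EP; "GPU"/"FP16"
    # are the backend/precision options shown in the pytorch/ort README.
    provider_options = OpenVINOProviderOptions(backend="GPU", precision="FP16")
    model = ORTInferenceModule(model, provider_options=provider_options)

    x = torch.rand(1, 3, 224, 224)
    with torch.no_grad():
        y = model(x)
    print(y.shape)

If that import chain still pulls in the training-flavored torch_ort, it suggests the torch-ort and torch-ort-infer packages are colliding in the same site-packages, so installing torch-ort-infer into a clean environment may be worth trying.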