How to build for multiple execution providers?
Describe the bug
I am trying to build ONNX Runtime with the TensorRT and OpenVINO execution providers together, so that I can select at run time which EP to use. However, the build fails with the following error:

```
/opt/onnxruntime/onnxruntime/core/providers/shared_library/provider_interfaces.h:8:10: fatal error: cuda_runtime.h: No such file or directory
 #include <cuda_runtime.h>
```
I am able to build each of the two EPs successfully on its own.
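For context on the goal here: once a single build (or wheel) actually contains both EPs, selection happens per session through the `providers` argument of `onnxruntime.InferenceSession`. A minimal sketch of the selection logic, assuming such a combined build exists (the helper `pick_providers` is hypothetical, not part of the ONNX Runtime API; the provider name strings are the real registered identifiers):

```python
# Sketch of run-time EP selection against a build that contains both EPs.
# pick_providers is a hypothetical helper; the provider-name strings are
# ONNX Runtime's actual registered EP identifiers.

def pick_providers(preferred, available):
    """Keep only the preferred EPs this build actually offers,
    always falling back to the CPU EP last."""
    chosen = [p for p in preferred if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen

# With onnxruntime importable, session creation would then look like:
#   import onnxruntime as ort
#   providers = pick_providers(
#       ["TensorrtExecutionProvider", "OpenVINOExecutionProvider"],
#       ort.get_available_providers())
#   sess = ort.InferenceSession("model.onnx", providers=providers)
```

The point of the fallback is that a session request never fails outright when a preferred EP is missing from the build; it degrades to the CPU EP instead.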
Urgency: None
System information
- OS Platform and Distribution: Ubuntu 18.04
- ONNX Runtime installed from: source
- ONNX Runtime version: 1.7.2
- Python version: 3.6.9
- GCC/Compiler version (if compiling from source): 7.5.0
- CUDA/cuDNN version: 11.0
- GPU model and memory: Quadro RTX 4000 (8GB)
- TensorRT version: 7.1.3.4
- OpenVINO version: 2021.2
To Reproduce
- Install the TensorRT version mentioned above
- Install the OpenVINO version mentioned above
- Source `setupvars.sh` from the OpenVINO installation
- Clone the ONNX Runtime version mentioned above
- Build it with the following command inside the onnxruntime directory:

```
./build.sh --cudnn_home /usr/lib/x86_64-linux-gnu --cuda_home /usr/local/cuda --use_tensorrt --tensorrt_home /opt/tensorrt --use_openvino CPU_FP32 --build_shared_lib --config Release --update --parallel --build --test --skip_submodule_sync
```
Expected behavior
Both EPs should be built together by this single build command.
Not sure if someone has ever tried building the TensorRT EP and OpenVINO EP into a single build. Tagging @jywu-msft to see if he has some thoughts on this matter.
I met the same problem. I installed onnxruntime with `pip install onnxruntime-gpu`, then installed the OpenVINO build with `pip install onnxruntime-openvino`.
Can I have both installed together and switch hardware acceleration by choosing a different provider?
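One way to see the conflict described above: both wheels install the same `onnxruntime` Python package, so whichever was installed last determines which EPs are reported. A guarded check (the `importlib` guard is just so the snippet also runs where onnxruntime is absent):

```python
# Report which execution providers the currently installed
# onnxruntime wheel exposes, without failing if it is absent.
import importlib.util

if importlib.util.find_spec("onnxruntime") is not None:
    import onnxruntime as ort
    # e.g. ['CPUExecutionProvider'] for the plain CPU wheel
    print(ort.get_available_providers())
else:
    print("onnxruntime is not installed")
```

If the list printed after installing `onnxruntime-openvino` no longer contains `CUDAExecutionProvider`, the two wheels have overwritten each other rather than merged.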
On Windows, is it possible to build with both DirectML and OpenVINO? Also, the OpenVINO CPU_FP32, GPU_FP32, and GPU_FP16 targets all need to be present.
Any ideas or suggestions on this?
Very good issue. That's exactly the problem I am facing now; I want all EPs together.
Same on my side: if you want to do GPU inference and handle both NVIDIA and Intel Xe GPUs efficiently, you need both DirectML and OpenVINO. But if you install the latter, you can no longer select the former.
Same here, I would like to build with CUDA, ROCm, and DirectML support.