
How to build for multiple execution providers?

Open shauryagoel opened this issue 3 years ago • 4 comments

Describe the bug I am trying to build onnxruntime with the TensorRT and OpenVINO execution providers together, so that I can select at run time which EP to use. However, I am receiving the following error: `/opt/onnxruntime/onnxruntime/core/providers/shared_library/provider_interfaces.h:8:10: fatal error: cuda_runtime.h: No such file or directory #include <cuda_runtime.h>`
I am able to build each of the two EPs separately without problems.

Urgency None

System information

  • OS Platform and Distribution: Ubuntu 18.04
  • ONNX Runtime installed from: source
  • ONNX Runtime version: 1.7.2
  • Python version: 3.6.9
  • GCC/Compiler version (if compiling from source): 7.5.0
  • CUDA/cuDNN version: 11.0
  • GPU model and memory: Quadro RTX 4000 (8GB)
  • TensorRT version: 7.1.3.4
  • OpenVINO version: 2021.2

To Reproduce

  • Install the TensorRT version mentioned above
  • Install the OpenVINO version mentioned above
  • Source `setupvars.sh` from OpenVINO
  • Clone the onnxruntime version mentioned above
  • Build from inside the onnxruntime directory with: `./build.sh --cudnn_home /usr/lib/x86_64-linux-gnu --cuda_home /usr/local/cuda --use_tensorrt --tensorrt_home /opt/tensorrt --use_openvino CPU_FP32 --build_shared_lib --config Release --update --parallel --build --test --skip_submodule_sync`

Expected behavior Both EPs should be built together by this single build command.
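For context, if a single build did bundle both EPs, runtime selection could look roughly like the sketch below. `InferenceSession`, its `providers` argument, and `get_available_providers()` are part of the public onnxruntime Python API; the `model.onnx` path, the preference order, and the `pick_providers` helper are illustrative assumptions:

```python
import os

# Hypothetical helper: keep the preferred EPs that this build actually
# exposes, always falling back to the CPU EP.
def pick_providers(preferred, available):
    chosen = [p for p in preferred if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen

try:
    import onnxruntime as ort
except ImportError:
    ort = None  # onnxruntime may not be installed in this environment

if ort is not None and os.path.exists("model.onnx"):
    providers = pick_providers(
        ["TensorrtExecutionProvider", "OpenVINOExecutionProvider"],
        ort.get_available_providers(),
    )
    # The session tries the providers in order and assigns each graph
    # node to the first provider that supports it.
    session = ort.InferenceSession("model.onnx", providers=providers)
```

This only works if the single build exposes all the desired EPs, which is exactly what the build error above prevents.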

shauryagoel avatar Nov 14 '21 03:11 shauryagoel

Not sure if someone has ever tried building the TensorRT EP and OpenVINO EP into a single build. Tagging @jywu-msft to see if he has some thoughts on this matter.

hariharans29 avatar Nov 15 '21 23:11 hariharans29

This issue has been automatically marked as stale due to inactivity and will be closed in 7 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

stale[bot] avatar Apr 17 '22 10:04 stale[bot]

I hit the same problem. I installed onnxruntime with `pip install onnxruntime-gpu`.


Then I installed the OpenVINO build with `pip install onnxruntime-openvino`.


Can I build them together and switch hardware acceleration by choosing a different provider?
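One way to see what a given install actually supports: `onnxruntime.get_available_providers()` (a real API call) lists the EPs the installed wheel was built with. The sketch below checks for the CUDA and OpenVINO EPs; the `missing_providers` helper is an illustrative assumption, and expecting both EPs in one wheel is precisely the open question of this thread:

```python
# Sketch: check whether the installed onnxruntime wheel exposes both
# the CUDA and OpenVINO EPs. The pip wheels are separate builds, so
# in practice only one set tends to be present at a time.

def missing_providers(wanted, available):
    """Return the wanted EPs that this build does not expose."""
    return [p for p in wanted if p not in available]

try:
    import onnxruntime as ort
    available = ort.get_available_providers()
except ImportError:
    available = []  # onnxruntime not installed in this environment

print("missing:", missing_providers(
    ["CUDAExecutionProvider", "OpenVINOExecutionProvider"], available))
```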

wanduoz avatar Jun 13 '22 09:06 wanduoz

On Windows, is it possible to build with both DirectML and OpenVINO? The OpenVINO CPU_FP32, GPU_FP32, and GPU_FP16 variants also need to be present.

Any ideas or suggestions on this?

venki-thiyag avatar Sep 15 '22 17:09 venki-thiyag

Very good issue. That's exactly the problem I am facing now; I want all the EPs together.

2catycm avatar Aug 21 '23 06:08 2catycm

Same on my side: to run GPU inference efficiently on both NVIDIA and Intel Xe hardware, you need both DirectML and OpenVINO. But once you install the latter, you can no longer select the former.

Fafa87 avatar Sep 15 '23 09:09 Fafa87

Same here, I would like to build with CUDA, ROCm, and DirectML support.

amblamps avatar Jul 04 '24 08:07 amblamps