
yolov5-onnxruntime

C++ YOLO v5 ONNX Runtime inference code for object detection.

Dependencies:

  • OpenCV 4.x
  • ONNXRuntime 1.7+
  • OS: Tested on Windows 10 and Ubuntu 20.04
  • CUDA 11+ [Optional]

Build

To build the project, run the following commands; don't forget to set the ONNXRUNTIME_DIR CMake option to the path of your ONNX Runtime installation:

mkdir build
cd build
cmake .. -DONNXRUNTIME_DIR=path_to_onnxruntime -DCMAKE_BUILD_TYPE=Release
cmake --build .

Run

Before running the executable, convert your PyTorch model to ONNX if you haven't done so yet. Check the official export tutorial in the YOLOv5 repository.
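With a recent version of the YOLOv5 repository, the export is typically done with its export.py script. The exact script location and flags depend on the YOLOv5 release you are using, so follow the tutorial for your version; a typical invocation looks like:

python export.py --weights yolov5m.pt --include onnx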

On Windows: to run the executable, either add the OpenCV and ONNX Runtime library directories to your environment PATH or place the required libraries (onnxruntime.dll and opencv_world.dll) next to the executable.

Run from CLI:

./yolo_ort --model_path yolov5.onnx --image bus.jpg --class_names coco.names --gpu
# On Windows: yolo_ort.exe with the same arguments
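
For reference, the core inference flow with the ONNX Runtime C++ API looks roughly like the sketch below. This is a simplified illustration, not the code from this repository: the model path, the 640x640 input size, and the tensor names "images"/"output" are assumptions based on a default ultralytics export, and a real detector additionally handles letterbox resizing, confidence filtering, and NMS.

#include <onnxruntime_cxx_api.h>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Create the ONNX Runtime environment and session (CPU by default).
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "yolov5");
    Ort::SessionOptions options;
    options.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_ALL);
    Ort::Session session(env, "yolov5.onnx", options);  // on Windows the path must be a wide string

    // Preprocess: BGR -> RGB, resize to 640x640, scale to [0,1], pack as an NCHW float blob.
    cv::Mat image = cv::imread("bus.jpg");
    cv::Mat blob = cv::dnn::blobFromImage(image, 1.0 / 255.0, cv::Size(640, 640),
                                          cv::Scalar(), /*swapRB=*/true, /*crop=*/false);

    // Wrap the blob in an input tensor.
    std::vector<int64_t> inputShape{1, 3, 640, 640};
    Ort::MemoryInfo memInfo = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value inputTensor = Ort::Value::CreateTensor<float>(
        memInfo, blob.ptr<float>(), blob.total(), inputShape.data(), inputShape.size());

    // Tensor names assumed from a default ultralytics export; query the session if yours differ.
    const char* inputNames[]  = {"images"};
    const char* outputNames[] = {"output"};
    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               inputNames, &inputTensor, 1,
                               outputNames, 1);

    // Each output row is [cx, cy, w, h, objectness, class scores...]; decode boxes and apply NMS here.
    auto shape = outputs[0].GetTensorTypeAndShapeInfo().GetShape();
    std::cout << "output dims:";
    for (int64_t d : shape) std::cout << " " << d;
    std::cout << std::endl;
    return 0;
}

To run on GPU (the --gpu flag), the session options would additionally register the CUDA execution provider (e.g. via OrtSessionOptionsAppendExecutionProvider_CUDA or SessionOptions::AppendExecutionProvider_CUDA, depending on your ONNX Runtime version).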

Demo

Detection example with the YOLOv5m ONNX model:

References

  • YOLO v5 repo: https://github.com/ultralytics/yolov5
  • YOLOv5 Runtime Stack repo: https://github.com/zhiqwang/yolov5-rt-stack
  • ONNX Runtime inference examples: https://github.com/microsoft/onnxruntime-inference-examples