ONNX runtime inference compatibility
ONNX model compatibility to speed up CPU inference for object detection
Hey @maikimati, I have some thoughts regarding this implementation.
- Perhaps it would be wise to factor the image preprocessing into a separate function so that it can be overridden (a sketch follows this list).
- The current resizing doesn't maintain the aspect ratio, yet the slices needn't be square. The YOLOv8 implementation first resizes the longest side and then pads the remaining space (see the letterbox sketch after this list). There's also an open pull request for an OpenVINO implementation (#896) that uses the same resizing scheme.
- It would be nice if `load_model` could accept a dictionary of options for setting up the inference session, including an alternative execution provider such as OpenVINO (see the session sketch below).
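
For the first two points, here's a minimal sketch of what a standalone, overridable preprocessing function could look like, using a YOLOv8-style letterbox (resize the longest side, pad the rest). The function name and defaults are my own assumptions, not part of this PR:

```python
import cv2
import numpy as np

def preprocess_image(image: np.ndarray, target_hw: tuple = (640, 640),
                     pad_value: int = 114) -> np.ndarray:
    """Letterbox resize: scale to fit `target_hw` without distortion, then pad.

    Keeping this as a separate function makes it easy to override; `target_hw`
    is (height, width), so non-square slices work too.
    """
    h, w = image.shape[:2]
    th, tw = target_hw
    # Scale so the image fits inside the target while preserving aspect ratio.
    scale = min(th / h, tw / w)
    new_h, new_w = round(h * scale), round(w * scale)
    resized = cv2.resize(image, (new_w, new_h), interpolation=cv2.INTER_LINEAR)
    # Pad the remaining space, splitting the border evenly on both sides.
    pad_h, pad_w = th - new_h, tw - new_w
    top, left = pad_h // 2, pad_w // 2
    return cv2.copyMakeBorder(resized, top, pad_h - top, left, pad_w - left,
                              cv2.BORDER_CONSTANT,
                              value=(pad_value, pad_value, pad_value))
```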
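
And for the last point, a sketch of how `load_model` could forward an options dictionary to `onnxruntime` — the `options` keys here are hypothetical, but the `onnxruntime` calls themselves are real:

```python
from typing import Optional

import onnxruntime as ort

def load_model(model_path: str,
               options: Optional[dict] = None) -> ort.InferenceSession:
    """Create an inference session configured from a caller-supplied dict."""
    options = options or {}
    sess_options = ort.SessionOptions()
    # Expose session knobs such as threading to the caller.
    sess_options.intra_op_num_threads = options.get("intra_op_num_threads", 0)
    # Providers are tried in order; default to plain CPU.
    providers = options.get("providers", ["CPUExecutionProvider"])
    return ort.InferenceSession(model_path, sess_options, providers=providers)

# Requesting OpenVINO (requires the onnxruntime-openvino package), with a
# CPU fallback listed after it:
session = load_model(
    "model.onnx",
    {"providers": ["OpenVINOExecutionProvider", "CPUExecutionProvider"]},
)
```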