
ONNX runtime inference compatibility

maikimati opened this issue · 1 comment

ONNX models compatibility to speed up CPU inference in object detection

maikimati avatar Jun 23 '23 17:06 maikimati

Hey @maikimati, I have some thoughts regarding this implementation.

  • Perhaps it would be wise to create a separate function for the image preprocessing, in case someone would like to override it.
  • The current resizing doesn't maintain the aspect ratio, and the slices needn't be square. The YOLOv8 implementation first resizes the longest side to the target size and then pads the remaining space. There's also an open pull request for an OpenVINO implementation (#896) that uses the same resizing scheme.
  • It would be nice if the load_model function could accept a dictionary of options for setting up the inference session, including an alternative execution provider such as OpenVINO.

karl-joan avatar Jul 14 '23 11:07 karl-joan