
[WIP] 🦀 Standalone Rust Inference

ashwinvaidya17 opened this issue 9 months ago • 3 comments

📝 Description

  • Requires a lot of cleanup, but here is the scaffolding.
  • Produces a simple heat-map overlay image.
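The heat-map overlay step can be pictured with a minimal sketch. Everything here is illustrative and not taken from the PR: the function name, the raw interleaved-RGB buffer, and the red-tint blend are assumptions; a real implementation would likely operate on decoded image buffers (e.g. via the `image` crate).

```rust
/// Blend a normalized anomaly map (one score in 0.0..=1.0 per pixel)
/// onto interleaved RGB pixel data, tinting anomalous regions red.
/// `alpha` controls the overall strength of the overlay.
fn overlay_heatmap(rgb: &mut [u8], anomaly: &[f32], alpha: f32) {
    assert_eq!(rgb.len(), anomaly.len() * 3);
    for (px, &score) in rgb.chunks_exact_mut(3).zip(anomaly) {
        let w = (score.clamp(0.0, 1.0) * alpha).clamp(0.0, 1.0);
        // Lerp each channel toward pure red (255, 0, 0) by weight `w`.
        px[0] = (px[0] as f32 * (1.0 - w) + 255.0 * w) as u8;
        px[1] = (px[1] as f32 * (1.0 - w)) as u8;
        px[2] = (px[2] as f32 * (1.0 - w)) as u8;
    }
}

fn main() {
    // Two gray pixels; only the second is flagged as anomalous.
    let mut rgb = vec![128u8, 128, 128, 128, 128, 128];
    overlay_heatmap(&mut rgb, &[0.0, 1.0], 1.0);
    assert_eq!(&rgb[0..3], &[128, 128, 128]); // normal pixel unchanged
    assert_eq!(&rgb[3..6], &[255, 0, 0]);     // anomalous pixel tinted red
    println!("overlay ok: {:?}", rgb);
}
```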

✨ Changes

Select what type of change your PR is:

  • [ ] ๐Ÿž Bug fix (non-breaking change which fixes an issue)
  • [ ] ๐Ÿ”จ Refactor (non-breaking change which refactors the code base)
  • [x] ๐Ÿš€ New feature (non-breaking change which adds functionality)
  • [ ] ๐Ÿ’ฅ Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • [ ] ๐Ÿ“š Documentation update
  • [ ] ๐Ÿ”’ Security update

✅ Checklist

Before you submit your pull request, please make sure you have completed the following steps:

  • [ ] 📋 I have summarized my changes in the CHANGELOG and followed the guidelines for my type of change (skip for minor changes, documentation updates, and test enhancements).
  • [ ] 📚 I have made the necessary updates to the documentation (if applicable).
  • [ ] 🧪 I have written tests that support my changes and prove that my fix is effective or my feature works (if applicable).

For more information about code review checklists, see the Code Review Checklist.

ashwinvaidya17 · Apr 26 '24

Is it fast?

alexriedel1 · Apr 29 '24

> Is it fast?

I haven't benchmarked it yet, but I can say it isn't the fastest: it currently uses the input image to set the model's input tensor info. I think we can make it more efficient by using Rust's native async support to separate data fetching from model inference, especially since OpenVINO also supports asynchronous inference. This is more of a PoC for now; we haven't decided whether standalone inference will be done in Rust or C++.
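As a rough illustration of the producer/consumer split described above: this is a sketch only, using std threads and a bounded channel rather than async, with a stubbed `infer` function standing in for the actual OpenVINO infer request.

```rust
use std::sync::mpsc;
use std::thread;

/// Stand-in for the real model call (e.g. an OpenVINO infer request).
fn infer(input: &[f32]) -> f32 {
    input.iter().sum()
}

fn main() {
    // Bounded channel: the producer blocks if it gets too far ahead,
    // giving natural backpressure between fetching and inference.
    let (tx, rx) = mpsc::sync_channel::<Vec<f32>>(4);

    // Producer thread: simulates fetching/decoding images ahead of time.
    let producer = thread::spawn(move || {
        for i in 0..3 {
            tx.send(vec![i as f32; 4]).unwrap();
        }
        // Dropping `tx` closes the channel, ending the consumer loop.
    });

    // Consumer: runs inference while the producer prepares the next input.
    let mut results = Vec::new();
    for batch in rx {
        results.push(infer(&batch));
    }
    producer.join().unwrap();

    assert_eq!(results, vec![0.0, 4.0, 8.0]);
    println!("results: {:?}", results);
}
```

The same shape carries over to async: the producer becomes a task feeding a channel, and the consumer awaits OpenVINO's asynchronous inference instead of calling it synchronously.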

ashwinvaidya17 · Apr 29 '24

> > Is it fast?
>
> I haven't benchmarked it yet, but I can say it isn't the fastest: it currently uses the input image to set the model's input tensor info. I think we can make it more efficient by using Rust's native async support to separate data fetching from model inference, especially since OpenVINO also supports asynchronous inference. This is more of a PoC for now; we haven't decided whether standalone inference will be done in Rust or C++.

Alright! Thanks for the info :)

alexriedel1 · Apr 29 '24