dnn_benchmark
An Android application for benchmarking DNN inference frameworks
Add int8-quantised models and compare their accuracy and speed across the frameworks. Compare baseline converted models with quantisation-aware fine-tuning (e.g. in MNN).
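For context on what the int8 comparison measures, here is a minimal, framework-agnostic sketch of symmetric per-tensor int8 quantisation, the basic scheme post-training converters apply; the helper names are illustrative, not APIs of any framework listed below:

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantisation: scale = max|w| / 127,
    then round each weight to the nearest int8 value."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights; the difference to the
    originals is the quantisation error that costs accuracy."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.003, 1.0]
q, scale = quantize_int8(weights)
print(q)                      # → [50, -127, 0, 100]
print(dequantize(q, scale))   # small values collapse to 0.0
```

Quantisation-aware fine-tuning exists to shrink exactly this rounding error, which is why comparing it against plain post-training conversion is worthwhile.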
Try to adapt the GPU compatibility logic from TFLite to the other frameworks: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/acceleration/compatibility
- [x] MNN
- [x] NCNN
- [ ] ONNX Runtime
- [ ] OpenCV?
- [ ] new base Docker image with the necessary NDK and CMake versions
- [ ] auto-increment versionCode on CI
- [ ] update fastlane version
- [ ] make...
* explore the possibility of using an NCNN model as input
Issue: the APK is large, and most of that space is redundant because we store the same weights in slightly different formats. Suggestion: keep only lightweight model definitions inside the APK. Because...