react-native-vision-camera
Trying to integrate TFLite as a Frame Processor Plugin❓
Question
Any tips on how to do this and/or how to structure the project?
What I tried
I've tried to work from the example project directory, but I'm not sure how to implement TFLite in the ExampleProcessorPlugin code.
VisionCamera Version
2.13.3
Additional information
- [ ] I am using Expo
- [X] I have read the Troubleshooting Guide
- [X] I agree to follow this project's Code of Conduct
- [X] I searched for similar questions in the issues page as well as in the discussions page and found none.
Hey, sorry, I've never used TFLite before - what exactly are you struggling with? What did you try?
Hi @iyinoluwamt,
you could convert your model into ONNX format and use either the React Native package (https://www.npmjs.com/package/onnxruntime-react-native) or the native runtime (https://onnxruntime.ai/docs/get-started/with-java.html); Java and Objective-C bindings exist. When exporting a TF model, make sure to pass the dummy input's batch size as None so you can use different batch sizes at inference time; otherwise the batch size will be fixed (tf2onnx does not provide dynamic axes the way PyTorch does).
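For reference, a minimal sketch of what running an exported model with onnxruntime-react-native could look like (the model path, tensor names, and input shape below are illustrative assumptions, not values from this thread):

```ts
import { InferenceSession, Tensor } from 'onnxruntime-react-native';

// 'input' and 'output' are placeholder tensor names - use the names
// your export actually produced.
async function detectOnFrame(modelPath: string, pixels: Float32Array) {
  // In real code, create the session once and reuse it - creation is expensive.
  const session = await InferenceSession.create(modelPath);

  // NHWC float tensor; the leading batch dimension is only flexible if the
  // model was exported with batch size None, as described above.
  const input = new Tensor('float32', pixels, [1, 320, 320, 3]);
  const results = await session.run({ input });

  return results.output.data; // raw model outputs
}
```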
> Hey, sorry, I've never used TFLite before - what exactly are you struggling with? What did you try?
These are from my Swift Frame Processor Plugin file. I tried to model it on the TFLite object detection iOS example: https://github.com/tensorflow/examples/blob/master/lite/examples/object_detection/ios/README.md.
*(screenshots of the Swift plugin code omitted)*
My issue is that this returns [] on every frame, so I'm not sure why no inferences are being made (TFLite's standalone Swift example works). I can provide more details if needed.
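For context, in VisionCamera v2 the native plugin is called from a worklet as a global function, so the empty array surfaces on the JS side like this (a sketch; `__detectObjects` is a hypothetical plugin name):

```ts
import { useFrameProcessor } from 'react-native-vision-camera';

// VisionCamera v2 exposes native frame processor plugins as global worklet
// functions; __detectObjects is a hypothetical name - use the name your
// plugin actually registers.
declare function __detectObjects(frame: unknown): unknown[];

export function useDetectionFrameProcessor() {
  return useFrameProcessor((frame) => {
    'worklet';
    const detections = __detectObjects(frame);
    // With the plugin described above this always logs 0, because the
    // native side is returning an empty array.
    console.log(`detections: ${detections.length}`);
  }, []);
}
```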
> Hi @iyinoluwamt,
> you could convert your model into ONNX format and use either the React Native package (https://www.npmjs.com/package/onnxruntime-react-native) or the native runtime (https://onnxruntime.ai/docs/get-started/with-java.html); Java and Objective-C bindings exist. When exporting a TF model, make sure to pass the dummy input's batch size as None so you can use different batch sizes at inference time; otherwise the batch size will be fixed (tf2onnx does not provide dynamic axes the way PyTorch does).
Do you have any idea how I could get the frame-by-frame input for the ONNX model without writing native code, using only React Native libraries?
See https://github.com/tensorflow/tfjs/issues/7773 :)
Hey - I built a fast C++ / JSI / GPU-accelerated plugin just for this: https://github.com/mrousavy/react-native-fast-tflite 🥳
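A minimal usage sketch, adapted from that library's README (the model file name and the lack of preprocessing are illustrative, and `frame.toArrayBuffer()` assumes VisionCamera v3):

```ts
import { useTensorflowModel } from 'react-native-fast-tflite';
import { useFrameProcessor } from 'react-native-vision-camera';

export function useObjectDetection() {
  // Load the bundled .tflite model; the file name here is an example.
  const detection = useTensorflowModel(require('./object_detection.tflite'));
  const model = detection.state === 'loaded' ? detection.model : undefined;

  return useFrameProcessor((frame) => {
    'worklet';
    if (model == null) return;
    // Most models need the frame resized/converted to their input shape
    // first (e.g. with a resize plugin); this passes the raw buffer.
    const data = new Uint8Array(frame.toArrayBuffer());
    const outputs = model.runSync([data]);
    console.log(`output tensors: ${outputs.length}`);
  }, [model]);
}
```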