
Trying to integrate TFLite as a Frame Processor Plugin❓

Open iyinoluwamt opened this issue 2 years ago • 4 comments

Question

Any tips on how to do this and/or how to structure the project?

What I tried

I've tried to go off the example project directory, but I'm not sure how to implement TFLite in the ExampleProcessorPlugin code.

VisionCamera Version

2.13.3

Additional information

iyinoluwamt avatar Jun 01 '22 15:06 iyinoluwamt

Hey, sorry I've never used TFLite before - what exactly are you struggling with? What did you try?

mrousavy avatar Jun 02 '22 09:06 mrousavy

Hi @iyinoluwamt,

you could convert your model to ONNX format and use it via react-native (https://www.npmjs.com/package/onnxruntime-react-native) or natively (https://onnxruntime.ai/docs/get-started/with-java.html — Java and Objective-C bindings exist). Note: for TF models, pass your dummy input's batch size as None so the exported model accepts different batch sizes; otherwise the batch dimension becomes a fixed value (tf2onnx does not offer dynamic axes the way PyTorch does).
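A minimal JS-side sketch of what feeding such an exported model typically involves: flattening a frame's RGBA pixels into a float tensor in NCHW layout with batch size 1 (possible because the batch axis was exported as None). The 0..1 normalization, the NCHW layout, and the function name are assumptions for illustration; match them to your actual model.

```typescript
// Sketch (illustrative, not from this thread): convert RGBA pixel bytes
// into a Float32Array in [1, 3, H, W] (NCHW) layout for an ONNX model
// exported with a dynamic batch dimension. Normalization to 0..1 is an
// assumption -- some models expect -1..1 or raw 0..255 instead.
function rgbaToNchwFloat32(
  rgba: Uint8Array, // width * height * 4 bytes (R, G, B, A per pixel)
  width: number,
  height: number,
): Float32Array {
  const plane = height * width;
  const out = new Float32Array(3 * plane); // batch = 1
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const px = (y * width + x) * 4; // byte offset into RGBA data
      const idx = y * width + x;      // position inside one channel plane
      out[idx] = rgba[px] / 255;                 // R plane
      out[plane + idx] = rgba[px + 1] / 255;     // G plane
      out[2 * plane + idx] = rgba[px + 2] / 255; // B plane
      // alpha byte is dropped
    }
  }
  return out;
}
```

The resulting array could then be wrapped in the runtime's tensor type (e.g. onnxruntime's `Tensor('float32', data, [1, 3, height, width])`) and passed to the session.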

felixdittrich92 avatar Jun 08 '22 08:06 felixdittrich92

> Hey, sorry I've never used TFLite before - what exactly are you struggling with? What did you try?

These are from my Swift Frame Processor Plugin file. I tried to model it off of the TFLite object-detection iOS example: https://github.com/tensorflow/examples/blob/master/lite/examples/object_detection/ios/README.md

[Screenshots attached: Run Inference, babel config, Picture1, FrameProcessor js]

My issue is that this returns [] on all frames, so I'm not sure why no inferences are being made (TFLite has a working Swift example). I can provide more details if you need.
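For context on debugging an always-empty result: a common culprit in detector pipelines is the post-processing step, where a confidence threshold set too high (or inputs normalized to the wrong range upstream) silently filters everything out. Below is an illustrative sketch of SSD-style post-processing (boxes/classes/scores), not the code from this thread; temporarily lowering the threshold to something tiny helps distinguish "no detection passes the cutoff" from "the interpreter produced nothing".

```typescript
// Illustrative sketch of SSD-style detector post-processing.
// All names here are hypothetical, not from the plugin in this thread.
interface Detection {
  classId: number;
  score: number;
  box: [number, number, number, number]; // [yMin, xMin, yMax, xMax], normalized
}

function filterDetections(
  boxes: number[][],  // [N][4] candidate boxes
  classes: number[],  // [N] class indices
  scores: number[],   // [N] confidence scores
  threshold = 0.5,    // drop this to e.g. 0.01 when debugging empty output
): Detection[] {
  const results: Detection[] = [];
  for (let i = 0; i < scores.length; i++) {
    if (scores[i] >= threshold) {
      results.push({
        classId: classes[i],
        score: scores[i],
        box: boxes[i] as [number, number, number, number],
      });
    }
  }
  return results;
}
```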

iyinoluwamt avatar Jun 08 '22 14:06 iyinoluwamt

> Hi @iyinoluwamt,
>
> you could convert your model to ONNX format and use it via react-native (https://www.npmjs.com/package/onnxruntime-react-native) or natively (https://onnxruntime.ai/docs/get-started/with-java.html — Java and Objective-C bindings exist). Note: for TF models, pass your dummy input's batch size as None so the exported model accepts different batch sizes; otherwise the batch dimension becomes a fixed value (tf2onnx does not offer dynamic axes the way PyTorch does).

Do you have any idea how I could get the frame-by-frame input for the ONNX model without using native code, just React Native libraries?

iyinoluwamt avatar Jun 08 '22 14:06 iyinoluwamt

See https://github.com/tensorflow/tfjs/issues/7773 :)

mrousavy avatar Jun 22 '23 13:06 mrousavy

Hey - I built a fast C++ / JSI / GPU-accelerated plugin just for this: https://github.com/mrousavy/react-native-fast-tflite 🥳

mrousavy avatar Sep 30 '23 09:09 mrousavy