Hi, I faced a similar issue while experimenting with the MediaPipe Pose model, and after some debugging I think I found a clue. The thing is that the inference time of `_interpreter.run`...
@MichaelRinger have you tried CPU/GPU acceleration? On my Poco F3 (Android), adding this code to `testYolov8`: ``` final options = InterpreterOptions(); bool gpuAcc = true; if (Platform.isAndroid...
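For reference, a minimal sketch of what that delegate setup typically looks like with the tflite_flutter package; the helper name `loadAcceleratedInterpreter` and the asset path are made up here, and the exact delegate constructors can differ between package versions:

```dart
import 'dart:io';
import 'package:tflite_flutter/tflite_flutter.dart';

// Sketch: attach a GPU delegate where available, otherwise rely on
// multi-threaded CPU execution. Constructor options vary by version.
Future<Interpreter> loadAcceleratedInterpreter(String assetPath) async {
  final options = InterpreterOptions();
  if (Platform.isAndroid) {
    // OpenCL/OpenGL GPU delegate on Android.
    options.addDelegate(GpuDelegateV2());
  } else if (Platform.isIOS) {
    // Metal GPU delegate on iOS.
    options.addDelegate(GpuDelegate());
  }
  options.threads = 4; // CPU threads for ops that fall back to CPU.
  return Interpreter.fromAsset(assetPath, options: options);
}
```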
In the case of MediaPipe Pose Lite the problem looks a bit different. I ran a benchmark with and without acceleration to test whether model inference takes longer than it should...
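For that kind of comparison, something like the sketch below is enough: it times only the `_interpreter.run` call, separate from pre/post-processing, so the numbers with and without a delegate are comparable (the helper `benchmarkRun` is a hypothetical name, not part of tflite_flutter):

```dart
import 'package:tflite_flutter/tflite_flutter.dart';

// Average latency of interpreter.run over several iterations,
// after a few untimed warm-up runs.
Duration benchmarkRun(Interpreter interpreter, Object input, Object output,
    {int warmup = 3, int iterations = 20}) {
  for (var i = 0; i < warmup; i++) {
    interpreter.run(input, output); // warm-up, not timed
  }
  final sw = Stopwatch()..start();
  for (var i = 0; i < iterations; i++) {
    interpreter.run(input, output);
  }
  sw.stop();
  return Duration(microseconds: sw.elapsedMicroseconds ~/ iterations);
}
```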
@codscino do you mean 'inference time' in the same sense as 'prediction time' in the @MichaelRinger example? That would be another clue that the problem is Android-only. @MichaelRinger do you maybe have an iPhone to...
@codscino oh, so the '900-1000ms inference time' was with the YOLO model on an iPhone, right?
Ok, so it is not just an Android problem.