[Android] Selfie Segmentation slow inference, GPU not working
The GPU delegate doesn't seem to work in the Selfie Segmentation demo for Android, and CPU performance is very poor. I need low latency for AR use cases. https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples/image_segmentation/android
Using the Selfie Segmenter model (selfie_segmenter.tflite) in live stream mode with the CPU delegate, I get inference times of 90+ ms on average. If the input image is downsized by 60%, that drops to 60+ ms, but segmentation quality becomes unacceptably poor. The DeepLabV3 model (deeplab_v3.tflite) performs worse at 200+ ms.
I have tried:
- Both category and confidence modes
- Front/Back camera
- Downsizing input bitmap/MPImage
- DeepLabV3 and Selfie Segmenter models
- com.google.mediapipe:tasks-vision versions 0.10.14 and 0.20230731
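For reference, the input downsizing mentioned above was done roughly like this (a sketch, not the sample's exact code; the 0.4f scale factor corresponds to "downsized by 60%"):

```kotlin
import android.graphics.Bitmap
import com.google.mediapipe.framework.image.BitmapImageBuilder
import com.google.mediapipe.framework.image.MPImage

// Downscale the camera frame before handing it to the segmenter.
fun toScaledMPImage(frame: Bitmap, scale: Float = 0.4f): MPImage {
    val scaled = Bitmap.createScaledBitmap(
        frame,
        (frame.width * scale).toInt(),
        (frame.height * scale).toInt(),
        /* filter = */ true
    )
    return BitmapImageBuilder(scaled).build()
}
```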
I am using a Google Pixel 9 device (Android 15), running on the CPU delegate.
The root problem appears to be the GPU delegate not working in the Android app, which forces the much slower CPU delegate. When I enable the GPU delegate, inference either fails outright, becomes extremely slow, or (on certain devices) crashes the app.
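For context, this is roughly how the delegate is selected in my setup (a minimal sketch against the 0.10.x Tasks API; the context, model path, and listener bodies are placeholders):

```kotlin
import com.google.mediapipe.tasks.core.BaseOptions
import com.google.mediapipe.tasks.core.Delegate
import com.google.mediapipe.tasks.vision.core.RunningMode
import com.google.mediapipe.tasks.vision.imagesegmenter.ImageSegmenter

val baseOptions = BaseOptions.builder()
    .setModelAssetPath("selfie_segmenter.tflite")
    .setDelegate(Delegate.GPU) // switching this to Delegate.CPU "works" but is slow
    .build()

val options = ImageSegmenter.ImageSegmenterOptions.builder()
    .setBaseOptions(baseOptions)
    .setRunningMode(RunningMode.LIVE_STREAM)
    .setOutputCategoryMask(true)
    .setOutputConfidenceMasks(false)
    .setResultListener { result, inputImage -> /* consume masks */ }
    .setErrorListener { e -> /* GPU init failures surface here */ }
    .build()

// Fails / crashes with Delegate.GPU on my device.
val segmenter = ImageSegmenter.createFromOptions(context, options)
```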
The web version works perfectly on the same device with the GPU delegate, with <3 ms inference. With CPU, inference shoots up to 120+ ms (as expected). https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples/image_segmentation/js
I have the same issue. When you enable the GPU delegate, the category_mask sample app just crashes.