MobileSAM
MobileSAM Android deployment on NNAPI
Hi, thanks for the great work you have done! I have tested all MobileSAM features in PyTorch and successfully converted both the image encoder and the prompt encoder/mask decoder parts to ONNX format. I'm currently trying to move the model to an Android app. The first approach I have tried is the PyTorch Mobile way:
- tracing the whole model from image and prompt inputs to mask outputs,
- optimizing it for mobile and saving it for the PyTorch Lite interpreter,
- putting the .ptl file into the app and implementing a few preprocessing and postprocessing operations (a rough sketch of the export steps follows this list).
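For reference, a minimal sketch of that export path. It assumes `sam_wrapper` is a module wrapping MobileSAM so that `forward()` takes plain tensors; the wrapper name, the 1024x1024 input resolution, and the prompt tensor shapes are assumptions, not details from the original post:

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Assumed: `sam_wrapper` wraps MobileSAM so forward() takes the preprocessed
# image plus prompt tensors and returns the masks. Adapt shapes to your wrapper.
sam_wrapper.eval()
example_image = torch.randn(1, 3, 1024, 1024)   # assumed input resolution
example_coords = torch.randn(1, 1, 2)           # assumed prompt-point layout
example_labels = torch.ones(1, 1)

traced = torch.jit.trace(sam_wrapper, (example_image, example_coords, example_labels))
optimized = optimize_for_mobile(traced)                 # defaults to the CPU backend
optimized._save_for_lite_interpreter("mobilesam.ptl")   # ship this file in the app's assets
```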
This gives me fully functional MobileSAM inference on the mobile CPU. The problem is the inference time: the encoding part alone takes about 4-5 seconds. That is why I'm currently trying to make it work with mobile GPU support through the NNAPI delegate, but it's quite tricky. The PyTorch Mobile convert_to_nnapi tool fails to build an NNAPI-compatible model due to some unsupported nodes, e.g. aten::add_ or aten::gelu. I'm going to try the ONNX route instead, by converting the two ONNX models I built earlier into ORT files with NNAPI optimization and using ONNX Runtime on the Android side.
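For anyone following the same route, a minimal sketch of that conversion step, assuming the two exported .onnx files sit in a local `onnx_models` directory (the directory name is an assumption); the conversion tool ships with the `onnxruntime` Python package:

```python
import subprocess

# Convert the exported ONNX models to ORT format. The "Runtime" optimization
# style keeps the graph in a form that runtime execution providers such as
# NNAPI can still claim, instead of pre-fusing everything for the CPU.
subprocess.run(
    [
        "python", "-m", "onnxruntime.tools.convert_onnx_models_to_ort",
        "onnx_models",                      # assumed folder with encoder + decoder .onnx files
        "--optimization_style", "Runtime",
    ],
    check=True,
)
```

On the Android side the resulting .ort files can then be loaded with the onnxruntime-android package, enabling NNAPI through the session options (the Java/Kotlin API exposes `SessionOptions.addNnapi()`). Nodes that NNAPI cannot handle should fall back to the CPU execution provider rather than failing the whole conversion, which is the main advantage over the PyTorch convert_to_nnapi path.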
I'm not sure if it's going to work as-is. Any help, support or tips would be much appreciated. I don't know if anybody else is working on the same thing.
Thanks in advance.
You can try ncnn or MNN. Can you share your Android project source code?
Can you share the .ptl file for the PyTorch Lite interpreter? I want to try this model out in my app.