android-demo-app
Getting Lite Interpreter
Hello, thank you for the nice work you provided.
I am currently working on a project that uses YOLO on an Android device. I was so happy to find these examples, but somehow they don't work in my environment.
Since I am new to Android, even though I have experience with PyTorch, it is hard to fix the code.
I keep getting an error that starts with:
java.lang.RuntimeException: Unable to start activity ComponentInfo{org.pytorch.demo.objectdetection/org.pytorch.demo.objectdetection.MainActivity}: com.facebook.jni.CppException: Lite Interpreter verson number does not match. The model version must be between 3 and 5But the model version is 7 () Exception raised from parseMethods at ../torch/csrc/jit/mobile/import.cpp:320 (most recent call first):
I can guess that this error comes from LiteModuleLoader, but I have no idea how to fix it or what the interpreter version means. I would be glad to get an answer, thanks! :)
I have the same issue. I follow the steps exactly, use their own model and get this error.
Any fix/workarounds found?
@Treshank No, I am opening this issue just to let the author know that this code is not working properly...
The same issue occurs in TorchVideo as well.
I am getting the same issue as well.
@Treshank @andreanne-lemay Okay... Let's see if the author is still working on this git repository...!
I've tried resetting my local repo to this commit - cd35a009ba964331abccd30f6fa0614224105d39 as suggested but it doesn't exist (as far as I can see).
@Michael97, I think they mean in model making; try resetting the yolov5 repo to that commit if you are using a custom model. I guess it doesn't apply to you otherwise.
Ah yeah, that makes sense. I'm using the provided one right now, well at least trying to use it.
I tried it @Michael97, no luck..
The last git version that sort of works for me is #141. It uses an older PyTorch; I have yet to test it with a custom model.
I can't seem to be able to use my trained model with #141. If anyone has been able to use/train a model, some instructions would be great.
Any news on this issue? @Treshank were you able to fix this?
@Treshank @stefan-falk I was able to run my custom model (classification densenet) without the version number error by reverting to the commit @Treshank indicated (#141) on the HelloWorld demo. This also implies going back to
implementation 'org.pytorch:pytorch_android:1.8.0'
implementation 'org.pytorch:pytorch_android_torchvision:1.8.0'
@andreanne-lemay thanks!
I didn't try this with a pytorch model though. I was using a Tensorflow model (tflite).
implementation 'org.tensorflow:tensorflow-lite-support:0.1.0'
implementation 'org.tensorflow:tensorflow-lite-metadata:0.1.0'
implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:0.0.0-nightly-SNAPSHOT'
But I guess I'll give pytorch a try then. 👍
@andreanne-lemay, what version of pytorch did you use to make the object detection model?
@Treshank I used pytorch 1.10.0 and the following lines to convert my model: https://github.com/andreanne-lemay/cervical_mobile_app/blob/main/mobile_model_conversion.py
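For anyone who can't open the link, a minimal sketch of such a conversion (the model below is a tiny placeholder, not the densenet from the linked script; swap in your own trained module and the exact preprocessing may differ):

```python
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

# Placeholder model -- replace with your own trained nn.Module.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

# Script the model, optimize it for mobile, and save it in the
# lite-interpreter (.ptl) format that LiteModuleLoader expects.
scripted = torch.jit.script(model)
optimized = optimize_for_mobile(scripted)
optimized._save_for_lite_interpreter("model.ptl")
```

Note that the PyTorch version running this script determines the bytecode version written into model.ptl, which is exactly what the Android runtime checks at load time.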
@andreanne-lemay Thanks!! Your solution works! I didn't have to use the converter though; I used the standard export.py with the torchscript include option. Keep in mind that the #141 version uses .pt, not .ptl. Also, for reference, I'm using the object detection app and a custom yolov5m model.
I can run the speech recognition example now with:
implementation 'org.pytorch:pytorch_android_lite:1.10.0'
Thanks @andreanne-lemay for pointing me there 👍
yes @stefan-falk. Your solution is also working. Using the latest master branch, and simply changing in build.gradle
implementation 'org.pytorch:pytorch_android_lite:1.9.0'
to
implementation 'org.pytorch:pytorch_android_lite:1.10.0'
works.
I think I already tried that and it didn't work for some (probably other) reason. But never mind as long as it works now :)
The torch.jit.mobile module has a _backport_for_mobile function to "backport" a model to a given version:
from torch.jit.mobile import (
_backport_for_mobile,
_get_model_bytecode_version,
)
MODEL_INPUT_FILE = "model_v7.ptl"
MODEL_OUTPUT_FILE = "model_v5.ptl"
print("model version", _get_model_bytecode_version(f_input=MODEL_INPUT_FILE))
_backport_for_mobile(f_input=MODEL_INPUT_FILE, f_output=MODEL_OUTPUT_FILE, to_version=5)
print("new model version", _get_model_bytecode_version(MODEL_OUTPUT_FILE))
Hi @raedle, firstly, it works for me, thank you for this lifesaving post. However, there is one small missing part: this method increases the size of the model. Version 7 was 34 MB; this one is 68 MB. It doubled the size.
Is there any solution we can apply without increasing the size of the model?
It seems to be working, but I suppose it is better to change build.gradle to
implementation 'org.pytorch:pytorch_android_lite:1.10.0'
implementation 'org.pytorch:pytorch_android_torchvision_lite:1.10.0'
I faced a similar issue. Moving to torch version 1.11.0 resolved it for me:
!pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
@MaratZakirov I'm getting this error on doing the backport
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
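That particular RuntimeError usually means the reader never got a valid zip archive at all -- e.g. a wrong path, a truncated download, or a file saved with plain torch.save instead of a TorchScript/lite export. A quick stdlib sanity check before blaming the backport (the file name is a placeholder):

```python
import os
import zipfile

MODEL_INPUT_FILE = "model_v7.ptl"  # placeholder path -- point at your file

# Both .ptl and TorchScript .pt files are zip archives; if is_zipfile
# returns False, the file is missing, truncated, or was not produced
# by a TorchScript export, and PytorchStreamReader will fail on it.
exists = os.path.exists(MODEL_INPUT_FILE)
print("exists:", exists)
print("size bytes:", os.path.getsize(MODEL_INPUT_FILE) if exists else 0)
print("valid zip:", zipfile.is_zipfile(MODEL_INPUT_FILE) if exists else False)
```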
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
from torch.jit.mobile import (
_backport_for_mobile,
_get_model_bytecode_version,
)
model = torch.hub.load('pytorch/vision:v0.10.0', 'deeplabv3_mobilenet_v3_large', pretrained=True)
model.eval()
scripted_module = torch.jit.script(model)
# Export full jit version model (not compatible with the mobile interpreter), kept here for comparison
# scripted_module.save("deeplabv3_scripted.pt")
# Export mobile interpreter version model (compatible with mobile interpreter)
optimized_scripted_module = optimize_for_mobile(scripted_module)
optimized_scripted_module._save_for_lite_interpreter("deeplabv3_scripted.ptl")
MODEL_INPUT_FILE = "deeplabv3_scripted.ptl"
MODEL_OUTPUT_FILE = "deeplabv5_scripted.ptl"
print("model version", _get_model_bytecode_version(f_input=MODEL_INPUT_FILE))
_backport_for_mobile(f_input=MODEL_INPUT_FILE, f_output=MODEL_OUTPUT_FILE, to_version=5)
print("new model version", _get_model_bytecode_version(MODEL_OUTPUT_FILE))
When I try to run the ImageSegmentation demo, I get "model version must be between 3 and 7 but the model version is 8 ()". After a bit of searching, I realized this may be caused by a version difference between the PyTorch that optimized the model and the version in build.gradle. I then used the latest version of 'org.pytorch:pytorch_android_lite', which is 1.12.2, and the problem was gone. The latest version for mobile can be found at https://search.maven.org/artifact/org.pytorch/pytorch_android_lite, and PyTorch releases for optimizing the model on your computer at https://github.com/pytorch/vision/releases
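A simple guard against this mismatch is to print the model's bytecode version right after export and compare it against the range your pytorch_android_lite runtime reports in its error message. A small sketch (the model and file name are placeholders):

```python
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile
from torch.jit.mobile import _get_model_bytecode_version

# Export a tiny placeholder model the same way the demo apps do.
scripted = torch.jit.script(nn.Linear(4, 2).eval())
optimize_for_mobile(scripted)._save_for_lite_interpreter("check_me.ptl")

# If this number is above the maximum the runtime accepts (e.g.
# "must be between 3 and 7"), either upgrade pytorch_android_lite
# or backport the model with torch.jit.mobile._backport_for_mobile.
print("bytecode version:", _get_model_bytecode_version("check_me.ptl"))
```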
Don't know if this case is still open, but I guess this will fix it:
Make sure you use the following in the Android build.gradle
implementation 'org.pytorch:pytorch_android_lite:1.12.2'
implementation 'org.pytorch:pytorch_android_torchvision_lite:1.12.2'
or any other higher PyTorch library version on the Android side.
@JosephKKim Hello. Were you able to fix the issue? If yes, can you please share your experience what did you modify. Thanks a lot in advance!
Sharing what I experienced:
I trained a model using YOLOv5s with the latest torch version and exported it to TorchScript -- probably produced as version 8 (in my case).
To fix the error which @JosephKKim initially presented (which I had as well), I just changed the libraries in build.gradle to:
implementation 'org.pytorch:pytorch_android_lite:1.12.2'
implementation 'org.pytorch:pytorch_android_torchvision_lite:1.12.2'
The Android ObjectDetection demo did run and load the model after that change; however, detections and predictions come back with very low confidence, if at all.
I used the same model with the iOS ObjectDetection demo -- https://github.com/pytorch/ios-demo-app/tree/master/ObjectDetection -- and it ran flawlessly. I had to change the Podfile to use: pod 'LibTorch-Lite', '~>1.13.0'
So I guess the model was trained and exported properly, but the Android libraries are out of date? I don't know exactly; I'm still checking.