
Getting Lite Interpreter

Open JosephKKim opened this issue 3 years ago • 35 comments

Hello, and thank you for the nice work you provided. I am currently working on a project that uses YOLO on an Android device. I was so happy to find these examples, but somehow they don't work in my environment. Since I am new to Android, even though I have experience in PyTorch, it is hard for me to fix the code. I keep getting an error that starts with:

    java.lang.RuntimeException: Unable to start activity ComponentInfo{org.pytorch.demo.objectdetection/org.pytorch.demo.objectdetection.MainActivity}: com.facebook.jni.CppException: Lite Interpreter verson number does not match. The model version must be between 3 and 5 but the model version is 7 () Exception raised from parseMethods at ../torch/csrc/jit/mobile/import.cpp:320 (most recent call first):

I can guess that this error comes from LiteModuleLoader, but I have no idea how to fix it or what the interpreter version means. I would be glad to get an answer. Thanks! :)
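(Editor's note: a quick way to see which bytecode version a model was exported with is `torch.jit.mobile._get_model_bytecode_version`. A minimal sketch, using a throwaway model in place of a real export; substitute the path of your own .ptl file:)

```python
import torch
from torch.jit.mobile import _get_model_bytecode_version

# Any scripted module saved for the lite interpreter works for illustration;
# replace this with your real exported model file.
model = torch.jit.script(torch.nn.Linear(4, 2))
model._save_for_lite_interpreter("model.ptl")

# The Android runtime only accepts a bounded range of bytecode versions
# (e.g. 3-5 for pytorch_android_lite 1.9, per the error above).
print("bytecode version:", _get_model_bytecode_version(f_input="model.ptl"))
```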

JosephKKim avatar Nov 05 '21 09:11 JosephKKim

I have the same issue. I followed the steps exactly, used their own model, and got this error.

Michael97 avatar Nov 08 '21 22:11 Michael97

Any fix/workarounds found?

Treshank avatar Nov 10 '21 13:11 Treshank

@Treshank No, I am making this issue just to let the author know this code is not working properly...

JosephKKim avatar Nov 10 '21 13:11 JosephKKim

Same issue in TorchVideo as well.

Treshank avatar Nov 10 '21 14:11 Treshank

I am getting the same issue as well.

andreanne-lemay avatar Nov 11 '21 04:11 andreanne-lemay

@Treshank @andreanne-lemay Okay... let's see if the author is still working on this git repository...!

JosephKKim avatar Nov 11 '21 07:11 JosephKKim

I've tried resetting my local repo to the suggested commit - cd35a009ba964331abccd30f6fa0614224105d39 - but it doesn't exist (as far as I can see).

Michael97 avatar Nov 11 '21 12:11 Michael97

@Michael97, I think they mean in model making: if you are using a custom model, try resetting the yolov5 repo to that commit. I guess it doesn't apply to you otherwise.

Treshank avatar Nov 11 '21 14:11 Treshank

Ah yeah, that makes sense. I'm using the provided one right now, well at least trying to use it.

Michael97 avatar Nov 11 '21 14:11 Michael97

I tried it @Michael97, no luck..

Treshank avatar Nov 12 '21 06:11 Treshank

The last git version that sort of works for me is #141. It uses an older PyTorch; I have yet to test it with a custom model.

Treshank avatar Nov 12 '21 07:11 Treshank

I can't seem to be able to use my trained model with #141. If anyone has been able to use/train a model, some instructions would be great.

Treshank avatar Nov 12 '21 08:11 Treshank

Any news on this issue? @Treshank were you able to fix this?

stefan-falk avatar Nov 17 '21 13:11 stefan-falk

@Treshank @stefan-falk I was able to run my custom model (a DenseNet classifier) without the version number error by reverting to the commit @Treshank indicated (#141) on the HelloWorld demo. This also means going back to

    implementation 'org.pytorch:pytorch_android:1.8.0'
    implementation 'org.pytorch:pytorch_android_torchvision:1.8.0'

andreanne-lemay avatar Nov 17 '21 14:11 andreanne-lemay

@andreanne-lemay thanks!

I didn't try this with a pytorch model though. I was using a Tensorflow model (tflite).

implementation 'org.tensorflow:tensorflow-lite-support:0.1.0'
implementation 'org.tensorflow:tensorflow-lite-metadata:0.1.0'
implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:0.0.0-nightly-SNAPSHOT'

But I guess I'll give pytorch a try then. 👍

stefan-falk avatar Nov 17 '21 14:11 stefan-falk

@andreanne-lemay, what version of pytorch did you use to make the object detection model?

Treshank avatar Nov 17 '21 15:11 Treshank

@Treshank I used pytorch 1.10.0 and the following lines to convert my model: https://github.com/andreanne-lemay/cervical_mobile_app/blob/main/mobile_model_conversion.py
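(Editor's note: the linked script follows the usual lite-interpreter export recipe. Roughly, with a hypothetical stand-in model for illustration rather than her exact network:)

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Stand-in model for illustration; substitute your trained network.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())
model.eval()

# Trace (or script) the model, optimize it for mobile, and save it in the
# lite-interpreter (.ptl) format that LiteModuleLoader on Android expects.
example = torch.rand(1, 3, 32, 32)
traced = torch.jit.trace(model, example)
optimized = optimize_for_mobile(traced)
optimized._save_for_lite_interpreter("model.ptl")
```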

andreanne-lemay avatar Nov 17 '21 15:11 andreanne-lemay

@andreanne-lemay Thanks!! Your solution works! I didn't have to use the converter though; I used the standard export.py with the torchscript include option. Note that the #141 version uses .pt, not .ptl; keep that in mind. Also, for reference, I'm using the object detection app and a custom yolov5m model.

Treshank avatar Nov 18 '21 07:11 Treshank

I can run the speech recognition example now with:

implementation 'org.pytorch:pytorch_android_lite:1.10.0'

Thanks @andreanne-lemay for pointing me there 👍

stefan-falk avatar Nov 18 '21 08:11 stefan-falk

Yes @stefan-falk, your solution is also working. Using the latest master branch and simply changing implementation 'org.pytorch:pytorch_android_lite:1.9.0' to implementation 'org.pytorch:pytorch_android_lite:1.10.0' in build.gradle works.

Treshank avatar Nov 18 '21 09:11 Treshank

I think I already tried that and it didn't work for some (probably other) reason. But never mind as long as it works now :)

stefan-falk avatar Nov 18 '21 09:11 stefan-falk

torch.jit.mobile has a _backport_for_mobile function to "backport" a model to a given bytecode version:

from torch.jit.mobile import (
    _backport_for_mobile,
    _get_model_bytecode_version,
)

MODEL_INPUT_FILE = "model_v7.ptl"
MODEL_OUTPUT_FILE = "model_v5.ptl"

# Check which bytecode version the model was exported with
print("model version", _get_model_bytecode_version(f_input=MODEL_INPUT_FILE))

# Rewrite the model at an older bytecode version the mobile runtime accepts
_backport_for_mobile(f_input=MODEL_INPUT_FILE, f_output=MODEL_OUTPUT_FILE, to_version=5)

print("new model version", _get_model_bytecode_version(MODEL_OUTPUT_FILE))

raedle avatar Feb 16 '22 17:02 raedle


Hi raedle, firstly: it works for me, thank you for this lifesaving post. However, there is one small catch: this method increases the size of the model. Version 7 was 34 MB; the backported one is 68 MB. It doubled the size.

Is there any solution that we can apply without increasing the size of the model?

celikmustafa89 avatar Mar 04 '22 15:03 celikmustafa89


It seems to be working, but I suppose it is better to change build.gradle to

implementation 'org.pytorch:pytorch_android_lite:1.10.0'
implementation 'org.pytorch:pytorch_android_torchvision_lite:1.10.0'

MaratZakirov avatar Apr 07 '22 15:04 MaratZakirov

I faced a similar issue as well. Moving to torch version 1.11.0 resolved it for me:

    !pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html

vemusharan avatar Aug 16 '22 16:08 vemusharan

@MaratZakirov I'm getting this error when doing the backport:

RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
from torch.jit.mobile import (
    _backport_for_mobile,
    _get_model_bytecode_version,
)

model = torch.hub.load('pytorch/vision:v0.10.0', 'deeplabv3_mobilenet_v3_large', pretrained=True)
model.eval()

scripted_module = torch.jit.script(model)
# Export full jit version model (not compatible mobile interpreter), leave it here for comparison
# scripted_module._save_for_lite_interpreter("deeplabv3_scripted.pt")
# Export mobile interpreter version model (compatible with mobile interpreter)
optimized_scripted_module = optimize_for_mobile(scripted_module)
optimized_scripted_module._save_for_lite_interpreter("deeplabv3_scripted.ptl")

MODEL_INPUT_FILE = "deeplabv3_scripted.ptl"
MODEL_OUTPUT_FILE = "deeplabv5_scripted.ptl"

print("model version", _get_model_bytecode_version(f_input=MODEL_INPUT_FILE))

_backport_for_mobile(f_input=MODEL_INPUT_FILE, f_output=MODEL_OUTPUT_FILE, to_version=5)

print("new model version", _get_model_bytecode_version(MODEL_OUTPUT_FILE))
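(Editor's note: that PytorchStreamReader error usually means the input file is empty, truncated, or not a valid zip archive (a .ptl file is a zip under the hood), rather than a bytecode-version problem. A quick sanity check, assuming the file path from the snippet above:)

```python
import os
import zipfile
import torch

# Produce a .ptl file so the check below has something to inspect;
# in practice you would point it at the file that fails to backport.
torch.jit.script(torch.nn.Linear(2, 2))._save_for_lite_interpreter("deeplabv3_scripted.ptl")

MODEL_INPUT_FILE = "deeplabv3_scripted.ptl"

# A healthy lite-interpreter file is a non-empty, valid zip archive.
print("size (bytes):", os.path.getsize(MODEL_INPUT_FILE))
print("valid zip:", zipfile.is_zipfile(MODEL_INPUT_FILE))
```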

ahmadbajwa8282 avatar Sep 01 '22 06:09 ahmadbajwa8282

When I try to run the ImageSegmentation demo, I get "model version must be between 3 and 7 but the model version is 8 ()". After a few searches, I realized this may be caused by a version difference between the PyTorch used to optimize the model and the version in build.gradle. I then used the latest version of 'org.pytorch:pytorch_android_lite', which is 1.12.2, and the problem was gone. The latest version for mobile can be found at https://search.maven.org/artifact/org.pytorch/pytorch_android_lite, and desktop PyTorch releases for optimizing the model at https://github.com/pytorch/vision/releases.

wziwen avatar Nov 30 '22 09:11 wziwen

Don't know if this case is still open, but I guess this will fix it:

Make sure you use, in the Android build.gradle:

    implementation 'org.pytorch:pytorch_android_lite:1.12.2'
    implementation 'org.pytorch:pytorch_android_torchvision_lite:1.12.2'

or any other higher PyTorch library version on the Android side.

nighthawk2032 avatar Dec 06 '22 16:12 nighthawk2032

@JosephKKim Hello. Were you able to fix the issue? If yes, can you please share your experience what did you modify. Thanks a lot in advance!

HripsimeS avatar Dec 13 '22 17:12 HripsimeS

Sharing what I experienced...

I trained a model using YOLOv5s with the latest torch version and exported it to TorchScript -- probably producing bytecode version 8 (in my case).

To fix the error @JosephKKim initially reported (which I had as well), I just changed the libraries in build.gradle to:

    implementation 'org.pytorch:pytorch_android_lite:1.12.2'
    implementation 'org.pytorch:pytorch_android_torchvision_lite:1.12.2'

The Android ObjectDetection demo did run and load the model after that change; however, detections and predictions came back with very low confidence, if at all.

I used the same model with the iOS ObjectDetection demo -- https://github.com/pytorch/ios-demo-app/tree/master/ObjectDetection -- and it ran flawlessly. I had to change the Podfile to use:

    pod 'LibTorch-Lite', '~>1.13.0'

So, I guess the model was trained and exported properly, but the Android libraries are out of date? I don't know exactly; still checking.

nighthawk2032 avatar Dec 13 '22 18:12 nighthawk2032