android-demo-app
Using D2Go's mask_rcnn model in the demo app
The D2Go Android demo app uses a faster_rcnn_fbnetv3a_C4 model to make predictions. I tried to use the mask_rcnn_fbnetv3a_C4 model from the model zoo to create a .pt file with the create_d2go script, but this does not work: the .jit file creation fails with the error "AssertionError: Per channel weight observer is not supported yet for ConvTranspose{n}d."
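For context, this assertion comes from PyTorch's eager-mode quantization rather than from D2Go itself: the default "fbgemm" qconfig uses a per-channel weight observer, which the quantization pass (at least in the PyTorch releases around the time of this thread) rejects for ConvTranspose layers, and the Mask R-CNN mask head contains a ConvTranspose2d, which is why the faster_rcnn export works but the mask_rcnn one does not. A minimal sketch that reproduces the message without D2Go:

```python
import torch
from torch import nn

# Stand-in for the mask head's deconv layer; the full D2Go model is not needed
# to trigger the assertion.
model = nn.Sequential(nn.ConvTranspose2d(8, 8, kernel_size=2)).eval()

# The "fbgemm" default qconfig uses a per-channel observer for weights.
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")

# On PyTorch ~1.8/1.9 this raises:
# AssertionError: Per channel weight observer is not supported yet for ConvTranspose{n}d.
torch.quantization.prepare(model, inplace=True)
```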
However, the .jit file can be created by exporting the model with D2Go's own TorchScript exporter. I edited the create_d2go script so that it loads the mask_rcnn config and the .jit file from that export, skipping the wrapping step, and a .pt file is created without issue. When I open the app, though, it crashes. Specifically, it gives me:
Unknown builtin op: _caffe2::GenerateProposals. Could not find any similar ops to _caffe2::GenerateProposals. This op may not exist or may not be currently supported in TorchScript.
I am rather new to exporting and am wondering whether support for this model is simply not there, or whether there is something wrong with my method. Has anybody gotten this to work, or does anyone know more about this?
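In case it helps anyone debugging the same crash: the error means the exported .jit file still calls caffe2 custom ops, which the TorchScript mobile runtime in the demo app does not ship. A quick way to check what an exported file actually contains before bundling it into the app (the file name is a placeholder for whatever your export produced):

```python
import torch

# Placeholder path: point this at the .jit/.pt file you plan to bundle with the app.
module = torch.jit.load("model.jit", map_location="cpu")

# List every operator baked into the serialized graph.
ops = torch.jit.export_opnames(module)

# Any "_caffe2::" entries (such as _caffe2::GenerateProposals) mean the model was
# exported through the caffe2 path and will not run on the mobile TorchScript runtime.
caffe2_ops = sorted(op for op in ops if op.startswith("_caffe2::"))
print("caffe2 ops in the model:", caffe2_ops)
```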
Any updates?
https://github.com/facebookresearch/d2go/issues/40 is related to this, but as far as I know there is no fix yet.
Did you try using the Caffe2 runtime with the exported .pb model?
@fmassa, could you please take a look?
- Could you try using `torchscript@tracing` when exporting the model? This will export the model using torchvision ops instead of caffe2 ops (see the sketch after this list).
- If you'd like to use int8 mode, the mode should be `torchscript_int8@tracing`. However, this doesn't work for mask rcnn for now. We will update the code and model soon.
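For anyone trying this with the demo's create_d2go.py: the mode string above is the predictor type passed to convert_and_export_predictor, so the fp32 tracing export would look roughly like the sketch below. This is only a sketch, assuming the create_fake_detection_data_loader helper already defined in create_d2go.py; the config name is also an assumption (use whichever mask_rcnn yaml your D2Go model zoo actually provides), and exact signatures may differ between D2Go versions.

```python
import copy

from d2go.export.api import convert_and_export_predictor
from d2go.model_zoo import model_zoo

# Assumed config name; substitute the mask_rcnn yaml available in your model zoo.
cfg_name = "mask_rcnn_fbnetv3a_dsmask_C4.yaml"
pytorch_model = model_zoo.get(cfg_name, trained=True)

# Same sizing logic as create_d2go.py; the data loader only supplies sample
# inputs for tracing in the fp32 path.
size_divisibility = max(pytorch_model.backbone.size_divisibility, 10)
h, w = size_divisibility, size_divisibility * 2
with create_fake_detection_data_loader(h, w, is_train=False) as data_loader:
    predictor_path = convert_and_export_predictor(
        model_zoo.get_config(cfg_name),
        copy.deepcopy(pytorch_model),
        "torchscript@tracing",  # torchvision ops instead of _caffe2::* ops
        "./",
        data_loader,
    )

print("exported predictor directory:", predictor_path)
```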
Hi, is there any good news about using the mask rcnn model in the app?