
Port FasterRCNN models to remove dependence on custom Python layer and export model to ONNX

Open spandantiwari opened this issue 6 years ago • 9 comments

Currently the Fast and Faster R-CNN examples use custom Python code (`ProposalLayer`), which creates issues when evaluating these models in C++ and when exporting them to ONNX. This feature request tracks the porting work:

  1. Remove dependence on custom Python code
  2. Export model to ONNX.

spandantiwari avatar Mar 20 '18 22:03 spandantiwari

Is this work planned, and is there an ETA for a fix? Or is it just a request at this point? Is there a suggested alternative model that does work with ONNX?

mattdot avatar May 21 '18 19:05 mattdot

We do not have a firm ETA on this, but it is definitely on our radar. This is more of an issue with ONNX, which, as of today, does not have all the operators required for some popular object detection models such as Faster R-CNN and SSD. There is some work on the ONNX side on speccing out these ops. Once that is done we can modify our implementation to work with them.

spandantiwari avatar May 21 '18 20:05 spandantiwari

I'm trying to work around this by using the __C.STORE_EVAL_MODEL_WITH_NATIVE_UDF option in FasterRCNN_config.py.

From my understanding, the Python function that handles this adds the path of the ProposalLayerLib DLL (which is built from Examples/Extensibility/ProposalLayer/ProposalLayerLib in the CNTK source but not included in the Python wheel) to the system %PATH%. Is this path getting baked into the saved model file? I haven't been able to move the model file elsewhere and load the proposal layer DLL in my C++ app; I always get an exception of the form `Plugin not found: Cntk.ProposalLayerLib-2.5.1.dll`.
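If the plugin DLL is resolved by name through the loader's search path rather than through a path stored in the model, then prepending the directory that contains it to `%PATH%` in the evaluating process should let it be found. A minimal sketch of that idea (the `plugin_dir` value is a placeholder for wherever you built ProposalLayerLib, not a real path):

```python
import os

# Placeholder: point this at the folder that actually contains
# Cntk.ProposalLayerLib-2.5.1.dll on your machine.
plugin_dir = os.path.abspath("ProposalLayerLib")

# Prepend the directory so the DLL loader can resolve the plugin
# by name before the model is loaded.
os.environ["PATH"] = plugin_dir + os.pathsep + os.environ.get("PATH", "")
```

The same PATH adjustment would need to happen in the C++ host process (e.g. via `SetDllDirectory` or by setting the environment before launch) for evaluation there.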

tjrileywisc avatar Jul 24 '18 13:07 tjrileywisc

Any update on this issue?

lyemeeki avatar Sep 04 '18 09:09 lyemeeki

@lyemeeki - The status is pretty much the same as outlined. ONNX still does not have the requisite ops for FasterRCNN models.

FYI - there's a TinyYOLO ONNX model available in the onnx/models repo.

spandantiwari avatar Sep 06 '18 17:09 spandantiwari

Now that ONNX supports Faster-RCNN, would you be able to provide the ability to export Faster-RCNN models to ONNX? Thanks.

brantPTS avatar Jun 28 '19 21:06 brantPTS

Faster R-CNN in CNTK uses custom ops that aren't part of the CNTK core op set. You can use Faster R-CNN from PyTorch, TensorFlow, or Keras instead; we support converting those models to ONNX.

ebarsoumMS avatar Jun 28 '19 22:06 ebarsoumMS

This problem also occurred for me: I used FasterRCNN for object detection, then tried to export the model to ONNX so I could use it with WinML.

GreenShadeZhang avatar Mar 31 '20 07:03 GreenShadeZhang

@GreenShadeZhang use FasterRCNN from PyTorch; that one is exportable to ONNX. The one in CNTK has a lot of custom Python code.

ebarsoum avatar Mar 31 '20 16:03 ebarsoum