CNTK
Port FasterRCNN models to remove dependence on custom Python layer and export model to ONNX
Currently Fast and Faster RCNN examples use custom Python code (ProposalLayer) which creates issues when evaluating these models in C++ and exporting these models to ONNX. This feature request tracks the porting work:
- Remove dependence on custom Python code
- Export model to ONNX.
Is this work planned, and is there an ETA for a fix? Or is it just a request at this point? Is there a suggested alternative model that does work with ONNX?
We do not have a firm ETA on this, but it is definitely on our radar. This is more of an issue with ONNX, which, as of today, does not have all the operators required for some popular object detection models such as FasterRCNN and SSD. There is work on the ONNX side to spec out these ops; once that is done, we can modify our implementation to use them.
I'm trying to work around this by using the __C.STORE_EVAL_MODEL_WITH_NATIVE_UDF option in FasterRCNN_config.py.
From my understanding, the Python function that handles this adds the path of the ProposalLayerLib dll (which is built from Examples/Extensibility/ProposalLayer/ProposalLayerLib in the CNTK source but is not included in the Python wheel) to the system %PATH%. Is this path getting baked into the saved model file? I haven't been able to move the model file elsewhere and still load the proposal layer dll in my C++ app; I always get an exception of the form Plugin not found: Cntk.ProposalLayerLib-2.5.1.dll.
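For what it's worth, here is a minimal sketch of the workaround described above. The directory name is hypothetical, and it assumes the plugin is resolved by DLL name through the OS loader rather than by a path stored in the model file (which would match the "Plugin not found" error when the DLL's folder isn't on %PATH%):

```python
import os

# Hypothetical location -- wherever Cntk.ProposalLayerLib-2.5.1.dll was
# built or copied to. Assumption: this directory is not recorded in the
# saved model, so it must be supplied by the evaluating process.
plugin_dir = r"C:\local\ProposalLayerLib"

# Prepend it to %PATH% so the loader can resolve the plugin by name when
# the model is evaluated; a C++ host process needs the same setup.
os.environ["PATH"] = plugin_dir + os.pathsep + os.environ.get("PATH", "")
```

The same effect can be had by copying the DLL next to the executable or setting %PATH% before launching the C++ app.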
Any update on this issue?
@lyemeeki - The status is pretty much the same as outlined. ONNX still does not have the requisite ops for FasterRCNN models.
FYI - there's a TinyYOLO ONNX model available in the onnx/models repo here.
Now that ONNX supports Faster-RCNN, would you be able to provide the ability to export Faster-RCNN models to ONNX? Thanks.
Faster-RCNN in CNTK uses custom ops that aren't part of the CNTK core op set. You can use Faster R-CNN from PyTorch, TensorFlow, or Keras, all of which support conversion to ONNX.
I also ran into this problem when I used FasterRCNN for object detection and then exported the model to ONNX; I was planning to load it in WinML.
@GreenShadeZhang use FasterRCNN from PyTorch; that one is exportable to ONNX. The one in CNTK relies on a lot of custom Python code.