
OpenVINO does not support the following ONNX operations: com.microsoft.FusedConv [Bug]

PaulCahuana opened this issue 3 years ago • 2 comments

System information
  • OpenVINO => 2022.1.0
  • Operating System / Platform => Ubuntu 22.04
  • Problem classification: Model Conversion
  • Framework: ONNX
  • Architecture name: OpenSeeFace from (https://github.com/emilianavt/OpenSeeFace) - Model name: lm_model3_opt.onnx
Detailed description

I was trying to load the model with OpenVINO but I got the error: "RuntimeError: Check 'unknown_operators.empty()' failed at frontends/onnx/frontend/src/core/graph.cpp:133: OpenVINO does not support the following ONNX operations: com.microsoft.FusedConv"

Steps to reproduce

```python
from openvino.runtime import Core

ie = Core()
model = ie.read_model("lm_model3_opt.onnx")
```

  • After running this, the error above is raised.

So, do you have plans to add the "com.microsoft.FusedConv" layer?

PaulCahuana avatar Sep 22 '22 18:09 PaulCahuana

Hey guys, I have the same issue when trying to convert this OpenSee model using the OpenVINO Workbench.

campos537 avatar Sep 22 '22 18:09 campos537

Hello @PaulCahuana, @campos537,

Thank you for reaching out to OpenVINO!

This operation is not part of the standard ONNX opset; it comes from the Microsoft ONNX Runtime extended opset. The currently supported fused ops are listed here

We will discuss internally how best to proceed with this. Stay tuned!

CC @mlukasze @tomdol

Ref. 92500

andrei-kochin avatar Sep 22 '22 20:09 andrei-kochin


Sure. Thanks for all!

PaulCahuana avatar Sep 23 '22 13:09 PaulCahuana

Hello @PaulCahuana! Support for FusedConv was added in https://github.com/openvinotoolkit/openvino/pull/13553. The model lm_model3_opt.onnx can now be loaded and inferred. Tested via benchmark_app (`./benchmark_app -shape [1,3,224,224] -m lm_model3_opt.onnx`).

Please let me know if the change covers all your cases and whether we can close the issue.

mbencer avatar Oct 19 '22 13:10 mbencer

Wow, that is really great! This model is awesome!

campos537 avatar Oct 19 '22 13:10 campos537

Additional detail on the operator is presented below.

Microsoft ONNX Runtime is an open source inference accelerator focused on ONNX models. It is the platform Vitis AI has integrated with to provide first-class ONNX model support; such models can be exported from a wide variety of training frameworks. It provides easy-to-use runtime APIs in Python and C++ and can run models without the separate compilation phase that TVM requires. ONNX Runtime also includes a partitioner that can automatically split a model between the CPU and FPGA, further easing model deployment. Finally, it incorporates the Vitis AI quantizer in a way that does not require a separate quantization setup.

com.microsoft.FusedConv

The fused convolution operator schema is the same as Conv, except that it also includes an activation attribute.

This version of the operator has been available since version 1 of the 'com.microsoft' operator set.

Attributes

  • activation : string
  • activation_params : list of floats
  • auto_pad : string
  • dilations : list of ints
  • group : int
  • kernel_shape : list of ints
  • pads : list of ints
  • strides : list of ints

Inputs (2 - 4)

  • X : T
  • W : T
  • B (optional) : T
  • Z (optional) : T

Outputs

Y : T

Type Constraints

  • T : tensor(float16), tensor(float), tensor(double). Constrain input and output types to float tensors.
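To make the schema concrete: FusedConv computes an ordinary convolution, adds the optional bias B, and applies the activation in one step. A rough NumPy sketch of those semantics for the simplest case only (1x1 kernel, stride 1, no padding; the function name and restrictions are illustrative, not part of the spec):

```python
import numpy as np

def fused_conv_1x1(X, W, B=None, activation="Relu"):
    """Illustrative FusedConv semantics: Conv (1x1 kernel, stride 1) then activation.

    X: (N, C, H, W), W: (M, C, 1, 1), B: (M,) or None.
    """
    M = W.shape[0]
    # A 1x1 convolution is a per-pixel linear map over input channels.
    Y = np.einsum("nchw,mc->nmhw", X, W[:, :, 0, 0])
    if B is not None:
        Y = Y + B.reshape(1, M, 1, 1)
    if activation == "Relu":
        Y = np.maximum(Y, 0.0)
    return Y

X = np.ones((1, 2, 2, 2), dtype=np.float32)
W = np.array([[[[1.0]], [[1.0]]],      # output channel 0: sums inputs -> 2
              [[[-1.0]], [[-1.0]]]])   # output channel 1: -2, clipped to 0 by Relu
Y = fused_conv_1x1(X, W)
print(Y[0, 0, 0, 0], Y[0, 1, 0, 0])  # 2.0 0.0
```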

Apollo9999 avatar Oct 26 '22 14:10 Apollo9999

Closing this, as the PR with support for the FusedConv op has been merged. Feel free to reopen to ask any questions related to this topic.

avitial avatar Nov 29 '22 22:11 avitial