
parse Optional op's TYPE_PROTO attribute

Open — sorenlassen opened this pull request 2 years ago • 1 comment

This PR fixes the problem that the ONNX parser cannot parse the Optional op's type_proto attribute: it crashes in ImportType or in convertOnnxAttributeProtoToMlirNamedAttribute in FrontendDialectTransformer, depending on whether the --useOnnxModelTypes flag is used.

This PR was tested by running:

Debug/bin/onnx-mlir -v --EmitONNXBasic --useOnnxModelTypes ../third_party/onnx/onnx/backend/test/data/node/test_if_opt/model.onnx -o /tmp/test_if_opt

and then:

Debug/bin/onnx-mlir-opt /tmp/test_if_opt.onnx.mlir

to verify that the resulting MLIR assembly is well-formed.

sorenlassen · Aug 11 '22 15:08

@chentong319 I added you as a reviewer since this is a frontend change. Thanks for looking into reviewing this PR.

AlexandreEichenberger · Aug 15 '22 16:08

The change to import OptType looks good. Any idea about adding the code to handle OptType in a future PR?

Please merge the PR; then I'll rename GetTensorByOnnxName in a follow-up PR (which will include a bunch of edits to code comments in SymbolTable.hpp).

I'm interested in lowering the optional ops and doing the type conversion for OptType in KrnlTypeConverter. I gather I should represent an opt type as a pair of a tensor/seq memref and a boolean scalar (true means the tensor/seq is populated; false means the optional holds no tensor/seq). Does that sound right? Any pointers on how to do it are welcome.
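
Something like this 1:N TypeConverter rule is what I have in mind — just a sketch, where OptType, getElementType(), and the surrounding setup are assumed names rather than actual onnx-mlir code:

#include "mlir/Transforms/DialectConversion.h"

// Sketch only: convert !onnx.Opt<T> into the pair (converted T, i1 flag).
// "OptType" and getElementType() are assumed names for the C++ class
// behind !onnx.Opt; the real onnx-mlir declarations may differ.
void addOptTypeConversion(mlir::TypeConverter &converter) {
  converter.addConversion(
      [&converter](OptType optType,
                   llvm::SmallVectorImpl<mlir::Type> &results)
          -> std::optional<mlir::LogicalResult> {
        // Convert the wrapped tensor/seq type with the existing rules.
        mlir::Type element = converter.convertType(optType.getElementType());
        if (!element)
          return std::nullopt; // let other conversion rules try
        results.push_back(element);
        // i1 flag: true means the tensor/seq is populated.
        results.push_back(mlir::IntegerType::get(optType.getContext(), 1));
        return mlir::success();
      });
}

The 1:N form (pushing two types into results) is what would let a single !onnx.Opt value become a memref plus a flag after conversion; source/target materializations would still be needed to pack and unpack the pair.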

sorenlassen · Aug 16 '22 20:08

Jenkins Linux s390x Build #6982 [push] parse Optional ops TYPE_... started at 14:27

jenkins-droid · Aug 17 '22 18:08

Jenkins Linux ppc64le Build #6075 [push] parse Optional ops TYPE_... started at 14:28

jenkins-droid · Aug 17 '22 18:08

Jenkins Linux amd64 Build #6967 [push] parse Optional ops TYPE_... started at 13:27

jenkins-droid · Aug 17 '22 18:08

Jenkins Linux amd64 Build #6967 [push] parse Optional ops TYPE_... passed after 1 hr 13 min

jenkins-droid · Aug 17 '22 19:08

Jenkins Linux ppc64le Build #6075 [push] parse Optional ops TYPE_... passed after 1 hr 35 min

jenkins-droid · Aug 17 '22 20:08

Jenkins Linux s390x Build #6982 [push] parse Optional ops TYPE_... passed after 1 hr 38 min

jenkins-droid · Aug 17 '22 20:08

@sorenlassen I'd like to give you some background on how optional arguments for ONNX operations are supported in onnx-mlir, in case it is relevant to the Optional type. In older versions of ONNX, an operation's arguments may be optional; this tag is associated with the argument, not with the type. An unprovided argument is represented with the onnx-mlir None type, or with a zero-size tensor (e.g. tensor<0xf32>), depending on who generated the ONNX model. When the ONNX operation is handled (in shape inference, rewriting, or lowering), a utility function isFromNone(Value) checks whether the optional input was provided. I feel that the Optional type can be translated into an optional input in most cases, which means the semantics of optional are handled statically inside onnx-mlir. The only place I ran into trouble is converting an ONNX operation into an external function call, where it is better to handle the optional inside the call. What do you think?
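
For example, the pattern looks roughly like this; the names around the check are placeholders, and only isFromNone is the actual onnx-mlir helper:

#include "mlir/IR/Value.h"

// Illustrative fragment only: lowerWithOptionalInput and its argument are
// placeholder names; isFromNone(Value) is the onnx-mlir utility named above.
bool isFromNone(mlir::Value v); // provided by onnx-mlir

void lowerWithOptionalInput(mlir::Value optionalInput) {
  if (isFromNone(optionalInput)) {
    // The model omitted this argument: choose the default lowering here,
    // at compile time; no runtime check is emitted.
  } else {
    // The argument was provided: emit the code path that consumes it.
  }
}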

chentong319 · Aug 18 '22 13:08

@chentong319 I believe a different approach is needed for the Optional type, because whether it's populated must be decided at runtime, so you cannot use the None type or isFromNone. At runtime you'll need the OptionalHasElement and OptionalGetElement ops to query and access anything of Optional type. For example, see export_loop_16_none() in onnx/backend/test/case/node/loop.py, which looks like this in MLIR:

"func.func"() ({
  ^bb0(%arg0: tensor<i64>, %arg1: tensor<i1>, %arg2: !onnx.Opt<!onnx.Seq<tensor<f32>>>):
    %0 = "onnx.Loop"(%arg0, %arg1, %arg2) ({
    ^bb0(%arg3: tensor<i64>, %arg4: tensor<i1>, %arg5: !onnx.Opt<!onnx.Seq<tensor<f32>>>):
      %1 = "onnx.Identity"(%arg4) : (tensor<i1>) -> tensor<i1>
      %2 = "onnx.OptionalHasElement"(%arg5) : (!onnx.Opt<!onnx.Seq<tensor<f32>>>) -> tensor<i1>
      %3 = "onnx.Not"(%2) : (tensor<i1>) -> tensor<i1>
      %4 = "onnx.If"(%3) ({
        %16 = "onnx.OptionalGetElement"(%arg5) : (!onnx.Opt<!onnx.Seq<tensor<f32>>>) -> !onnx.Seq<tensor<f32>>
        "onnx.Return"(%16) : (!onnx.Seq<tensor<f32>>) -> ()
      }, {
        %16 = "onnx.Constant"() {value = dense<0.000000e+00> : tensor<f32>} : () -> tensor<f32>
        %17 = "onnx.SequenceConstruct"(%16) : (tensor<f32>) -> !onnx.Seq<tensor<f32>>
        "onnx.Return"(%17) : (!onnx.Seq<tensor<f32>>) -> ()
      }) {input_names = [], output_names = ["init_seq_in"]} : (tensor<i1>) -> !onnx.Seq<tensor<f32>>
      %5 = "onnx.Constant"() {value = dense<[1.000000e+00, 2.000000e+00, 3.000000e+00, 4.000000e+00, 5.000000e+00]> : tensor<5xf32>} : () -> tensor<5xf32>
      %6 = "onnx.Constant"() {value = dense<1> : tensor<i64>} : () -> tensor<i64>
      %7 = "onnx.Constant"() {value = dense<0> : tensor<1xi64>} : () -> tensor<1xi64>
      %8 = "onnx.Add"(%arg3, %6) : (tensor<i64>, tensor<i64>) -> tensor<i64>
      %9 = "onnx.Constant"() {value = dense<0> : tensor<i64>} : () -> tensor<i64>
      %10 = "onnx.Unsqueeze"(%8, %9) : (tensor<i64>, tensor<i64>) -> tensor<1xi64>
      %11 = "onnx.NoValue"() {value} : () -> none
      %12 = "onnx.NoValue"() {value} : () -> none
      %13 = "onnx.Slice"(%5, %7, %10, %11, %12) : (tensor<5xf32>, tensor<1xi64>, tensor<1xi64>, none, none) -> tensor<*xf32>
      %14 = "onnx.NoValue"() {value} : () -> none
      %15 = "onnx.SequenceInsert"(%4, %13, %14) : (!onnx.Seq<tensor<f32>>, tensor<*xf32>, none) -> !onnx.Seq<tensor<f32>>
      "onnx.Return"(%1, %15) : (tensor<i1>, !onnx.Seq<tensor<f32>>) -> ()
    }) {input_names = ["iter_count", "cond_in", "opt_seq_in"], output_names = ["cond_out", "seq_out"]} : (tensor<i64>, tensor<i1>, !onnx.Opt<!onnx.Seq<tensor<f32>>>) -> tensor<i1>
    "func.return"(%0) : (tensor<i1>) -> ()
  }) {function_type = (tensor<i64>, tensor<i1>, !onnx.Opt<!onnx.Seq<tensor<f32>>>) -> !onnx.Seq<tensor<*xf32>>, input_names = ["trip_count", "cond", "opt_seq"], output_names = ["seq_res"], sym_name = "main_graph"} : () -> ()

which is generated with:

Debug/bin/onnx-mlir -v --EmitONNXBasic --useOnnxModelTypes ../third_party/onnx/onnx/backend/test/data/node/test_loop16_seq_none/model.onnx -o mlirassembly

(Counterintuitively, the else branch comes before the then branch in the MLIR assembly; I have sent out a fix for code review in PR #1609.)

sorenlassen · Aug 18 '22 14:08