
TorchOnnxToTorch doesn't support onnx.Conv

muwys518 opened this issue on Dec 18, 2023 · 5 comments

First, I obtained model.onnx and imported it successfully with

1. torch-mlir-import-onnx model.onnx -o model_onnx_torch.mlir

Then I converted it successfully with

2. torch-mlir-opt model_onnx_torch.mlir --convert-torch-onnx-to-torch -o model_torch.mlir

Then I ran

3. torch-mlir-opt model_torch.mlir --convert-torch-to-tosa -o model_tosa.mlir

and got this error:

error: failed to legalize operation 'torch.operator' that was explicitly marked illegal
%50 = torch.operator "onnx.Conv"(%arg0, %0, %1) {torch.onnx.dilations = [1 : si64, 1 : si64], torch.onnx.group = 1 : si64, torch.onnx.kernel_shape = [1 : si64, 1 : si64], torch.onnx.pads = [0 : si64, 0 : si64, 0 : si64, 0 : si64], torch.onnx.strides = [1 : si64, 1 : si64]} : (!torch.vtensor<[1,6,672,672],f32>, !torch.vtensor<[24,6,1,1],f32>, !torch.vtensor<[24],f32>) -> !torch.vtensor<[1,24,672,672],f32>

It looks like torch.operator "onnx.Conv" is not converted in step 2. Please help.
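
To diagnose cases like this, it can help to list the op types and attribute names that the model actually carries and compare them against what TorchOnnxToTorch currently handles. Below is a minimal sketch using the onnx Python package (assuming the file is the model.onnx from step 1; this is illustrative and not part of torch-mlir itself):

import onnx

model = onnx.load("model.onnx")

ops = {}
for node in model.graph.node:
    # Collect each op type together with the attribute names it uses,
    # e.g. Conv with auto_pad / dilations / kernel_shape / pads / strides.
    ops.setdefault(node.op_type, set()).update(a.name for a in node.attribute)

for op_type in sorted(ops):
    print(op_type, sorted(ops[op_type]))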

muwys518 commented on Dec 18, 2023

This project is actively working through the op support list, but that work has only been in progress for a couple of weeks. See the status update here, which includes a link to the issue being used to track op support.

https://discourse.llvm.org/t/rfc-onnx-import-into-torch-mlir/75018/6?u=stellaraccident

stellaraccident commented on Dec 18, 2023

It looks like someone is working on this op now: https://github.com/nod-ai/SHARK-Turbine/issues/253

stellaraccident commented on Dec 18, 2023

c1.mlir:104:12: error: failed to legalize operation 'torch.operator' that was explicitly marked illegal
%101 = torch.operator "onnx.Conv"(%arg0, %4) {torch.onnx.auto_pad = "SAME_LOWER", torch.onnx.dilations = [1 : si64, 1 : si64], torch.onnx.kernel_shape = [3 : si64, 3 : si64], torch.onnx.strides = [2 : si64, 2 : si64]} : (!torch.vtensor<[1,3,416,416],f32>, !torch.vtensor<[32,3,3,3],f32>) -> !torch.vtensor<[1,32,208,208],f32>
           ^
c1.mlir:104:12: note: see current operation: %101 = "torch.operator"(%arg0, %4) <{name = "onnx.Conv"}> {torch.onnx.auto_pad = "SAME_LOWER", torch.onnx.dilations = [1 : si64, 1 : si64], torch.onnx.kernel_shape = [3 : si64, 3 : si64], torch.onnx.strides = [2 : si64, 2 : si64]} : (!torch.vtensor<[1,3,416,416],f32>, !torch.vtensor<[32,3,3,3],f32>) -> !torch.vtensor<[1,32,208,208],f32>

Is auto_pad not supported by the conversion?
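
For reference, ONNX's auto_pad = "SAME_UPPER" / "SAME_LOWER" is a shorthand that the converter has to expand into explicit per-axis pads. A small sketch of that calculation, following the ONNX Conv spec (same_pads is a made-up helper name for illustration, not torch-mlir code):

import math

def same_pads(in_size, kernel, stride, dilation, mode="SAME_UPPER"):
    # Output size for SAME padding, then the total padding needed to reach it.
    out_size = math.ceil(in_size / stride)
    eff_kernel = (kernel - 1) * dilation + 1
    total = max((out_size - 1) * stride + eff_kernel - in_size, 0)
    low, high = total // 2, total - total // 2
    # SAME_UPPER puts the extra pixel at the end, SAME_LOWER at the beginning.
    return (low, high) if mode == "SAME_UPPER" else (high, low)

# The failing op above: 416 input, 3x3 kernel, stride 2, dilation 1, SAME_LOWER.
print(same_pads(416, 3, 2, 1, "SAME_LOWER"))  # (1, 0) for each spatial axis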

muwys518 commented on Jan 29, 2024

IR to reproduce the issue:

module {
  func.func @CNTKGraph(%arg0: !torch.vtensor<[1,1,28,28],f32>, %arg1: !torch.vtensor<[8,1,5,5],f32>) -> !torch.vtensor<[1,8,28,28],f32> attributes {torch.onnx_meta.ir_version = 7 : si64, torch.onnx_meta.opset_version = 17 : si64, torch.onnx_meta.producer_name = "CNTK", torch.onnx_meta.producer_version = "2.5.1"} {
    %8 = torch.operator "onnx.Conv"(%arg0, %arg1) {torch.onnx.auto_pad = "SAME_UPPER", torch.onnx.dilations = [1 : si64, 1 : si64], torch.onnx.group = 1 : si64, torch.onnx.kernel_shape = [5 : si64, 5 : si64], torch.onnx.strides = [1 : si64, 1 : si64]} : (!torch.vtensor<[1,1,28,28],f32>, !torch.vtensor<[8,1,5,5],f32>) -> !torch.vtensor<[1,8,28,28],f32> 
    return %8 : !torch.vtensor<[1,8,28,28],f32>
  }
}
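
If anyone wants to regenerate an ONNX file corresponding to this IR, a minimal sketch with the onnx helper API is below (the graph name mirrors the IR above; the output file name is illustrative):

import onnx
from onnx import TensorProto, helper

# Single Conv node with auto_pad="SAME_UPPER", matching the repro IR.
conv = helper.make_node(
    "Conv", ["x", "w"], ["y"],
    auto_pad="SAME_UPPER", dilations=[1, 1], group=1,
    kernel_shape=[5, 5], strides=[1, 1],
)
graph = helper.make_graph(
    [conv], "CNTKGraph",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 1, 28, 28]),
     helper.make_tensor_value_info("w", TensorProto.FLOAT, [8, 1, 5, 5])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 8, 28, 28])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 17)])
onnx.checker.check_model(model)
onnx.save(model, "conv_same_upper.onnx")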

pdhirajkumarprasad commented on Sep 6, 2024

The issue is fixed by https://github.com/llvm/torch-mlir/pull/3670

vivekkhandelwal1 commented on Sep 6, 2024