[runtime][pytorch-3dunet_vaiq]: element at index 0 (-0.712965) does not match the expected (-0.713635) for InstanceNormalization

pdhirajkumarprasad opened this issue 1 year ago • 2 comments

What happened?

I am seeing a runtime numeric mismatch for pytorch-3dunet_vaiq, with the error "element at index 0 (-0.712965) does not match the expected (-0.713635)":

module {
  func.func @torch_jit(%arg0: !torch.vtensor<[1,1,64,128,128],f32>) -> !torch.vtensor<[1,8,2097152],f32> attributes {torch.onnx_meta.ir_version = 8 : si64, torch.onnx_meta.opset_version = 17 : si64, torch.onnx_meta.producer_name = "pytorch", torch.onnx_meta.producer_version = "1.13.1"} {
    %0 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<_module_1.weight> : tensor<1xf32>} : () -> !torch.vtensor<[1],f32> 
    %1 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<_module_1.bias> : tensor<1xf32>} : () -> !torch.vtensor<[1],f32> 
    %2 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<_module_2.weight> : tensor<16x1x3x3x3xf32>} : () -> !torch.vtensor<[16,1,3,3,3],f32> 
    %none = torch.constant.none
    %3 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<_> : tensor<f32>} : () -> !torch.vtensor<[],f32> 
    %4 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__1> : tensor<si8>} : () -> !torch.vtensor<[],si8> 
    %5 = torch.operator "onnx.QuantizeLinear"(%arg0, %3, %4) : (!torch.vtensor<[1,1,64,128,128],f32>, !torch.vtensor<[],f32>, !torch.vtensor<[],si8>) -> !torch.vtensor<[1,1,64,128,128],si8> 
    %6 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__2> : tensor<f32>} : () -> !torch.vtensor<[],f32> 
    %7 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__3> : tensor<si8>} : () -> !torch.vtensor<[],si8> 
    %8 = torch.operator "onnx.DequantizeLinear"(%5, %6, %7) : (!torch.vtensor<[1,1,64,128,128],si8>, !torch.vtensor<[],f32>, !torch.vtensor<[],si8>) -> !torch.vtensor<[1,1,64,128,128],f32> 
    %9 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__4> : tensor<f32>} : () -> !torch.vtensor<[],f32> 
    %10 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__5> : tensor<si8>} : () -> !torch.vtensor<[],si8> 
    %11 = torch.operator "onnx.QuantizeLinear"(%0, %9, %10) : (!torch.vtensor<[1],f32>, !torch.vtensor<[],f32>, !torch.vtensor<[],si8>) -> !torch.vtensor<[1],si8> 
    %12 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__6> : tensor<f32>} : () -> !torch.vtensor<[],f32> 
    %13 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__7> : tensor<si8>} : () -> !torch.vtensor<[],si8> 
    %14 = torch.operator "onnx.DequantizeLinear"(%11, %12, %13) : (!torch.vtensor<[1],si8>, !torch.vtensor<[],f32>, !torch.vtensor<[],si8>) -> !torch.vtensor<[1],f32> 
    %15 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__8> : tensor<f32>} : () -> !torch.vtensor<[],f32> 
    %16 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__9> : tensor<si8>} : () -> !torch.vtensor<[],si8> 
    %17 = torch.operator "onnx.QuantizeLinear"(%1, %15, %16) : (!torch.vtensor<[1],f32>, !torch.vtensor<[],f32>, !torch.vtensor<[],si8>) -> !torch.vtensor<[1],si8> 
    %18 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__10> : tensor<f32>} : () -> !torch.vtensor<[],f32> 
    %19 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__11> : tensor<si8>} : () -> !torch.vtensor<[],si8> 
    %20 = torch.operator "onnx.DequantizeLinear"(%17, %18, %19) : (!torch.vtensor<[1],si8>, !torch.vtensor<[],f32>, !torch.vtensor<[],si8>) -> !torch.vtensor<[1],f32> 
    %21 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__12> : tensor<3xsi64>} : () -> !torch.vtensor<[3],si64> 
    %22 = torch.operator "onnx.Reshape"(%8, %21) {torch.onnx.allowzero = 0 : si64} : (!torch.vtensor<[1,1,64,128,128],f32>, !torch.vtensor<[3],si64>) -> !torch.vtensor<[1,1,1048576],f32> 
    %23 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__13> : tensor<1xf32>} : () -> !torch.vtensor<[1],f32> 
    %24 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__14> : tensor<1xf32>} : () -> !torch.vtensor<[1],f32> 
    %25 = torch.operator "onnx.InstanceNormalization"(%22, %23, %24) {torch.onnx.epsilon = 9.99999974E-6 : f32} : (!torch.vtensor<[1,1,1048576],f32>, !torch.vtensor<[1],f32>, !torch.vtensor<[1],f32>) -> !torch.vtensor<[1,1,1048576],f32> 
    %26 = torch.operator "onnx.Shape"(%8) : (!torch.vtensor<[1,1,64,128,128],f32>) -> !torch.vtensor<[5],si64> 
    %27 = torch.operator "onnx.Reshape"(%25, %26) {torch.onnx.allowzero = 0 : si64} : (!torch.vtensor<[1,1,1048576],f32>, !torch.vtensor<[5],si64>) -> !torch.vtensor<[1,1,64,128,128],f32> 
    %28 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__15> : tensor<3xsi64>} : () -> !torch.vtensor<[3],si64> 
    %29 = torch.operator "onnx.Unsqueeze"(%14, %28) : (!torch.vtensor<[1],f32>, !torch.vtensor<[3],si64>) -> !torch.vtensor<[1,1,1,1],f32> 
    %30 = torch.operator "onnx.Mul"(%27, %29) : (!torch.vtensor<[1,1,64,128,128],f32>, !torch.vtensor<[1,1,1,1],f32>) -> !torch.vtensor<[1,1,64,128,128],f32> 
    %31 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__16> : tensor<3xsi64>} : () -> !torch.vtensor<[3],si64> 
    %32 = torch.operator "onnx.Unsqueeze"(%20, %31) : (!torch.vtensor<[1],f32>, !torch.vtensor<[3],si64>) -> !torch.vtensor<[1,1,1,1],f32> 
    %33 = torch.operator "onnx.Add"(%30, %32) : (!torch.vtensor<[1,1,64,128,128],f32>, !torch.vtensor<[1,1,1,1],f32>) -> !torch.vtensor<[1,1,64,128,128],f32> 
    %34 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__17> : tensor<f32>} : () -> !torch.vtensor<[],f32> 
    %35 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__18> : tensor<si8>} : () -> !torch.vtensor<[],si8> 
    %36 = torch.operator "onnx.QuantizeLinear"(%33, %34, %35) : (!torch.vtensor<[1,1,64,128,128],f32>, !torch.vtensor<[],f32>, !torch.vtensor<[],si8>) -> !torch.vtensor<[1,1,64,128,128],si8> 
    %37 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__19> : tensor<f32>} : () -> !torch.vtensor<[],f32> 
    %38 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__20> : tensor<si8>} : () -> !torch.vtensor<[],si8> 
    %39 = torch.operator "onnx.DequantizeLinear"(%36, %37, %38) : (!torch.vtensor<[1,1,64,128,128],si8>, !torch.vtensor<[],f32>, !torch.vtensor<[],si8>) -> !torch.vtensor<[1,1,64,128,128],f32> 
    %40 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__21> : tensor<f32>} : () -> !torch.vtensor<[],f32> 
    %41 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__22> : tensor<si8>} : () -> !torch.vtensor<[],si8> 
    %42 = torch.operator "onnx.QuantizeLinear"(%2, %40, %41) : (!torch.vtensor<[16,1,3,3,3],f32>, !torch.vtensor<[],f32>, !torch.vtensor<[],si8>) -> !torch.vtensor<[16,1,3,3,3],si8> 
    %43 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__23> : tensor<f32>} : () -> !torch.vtensor<[],f32> 
    %44 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__24> : tensor<si8>} : () -> !torch.vtensor<[],si8> 
    %45 = torch.operator "onnx.DequantizeLinear"(%42, %43, %44) : (!torch.vtensor<[16,1,3,3,3],si8>, !torch.vtensor<[],f32>, !torch.vtensor<[],si8>) -> !torch.vtensor<[16,1,3,3,3],f32> 
    %46 = torch.operator "onnx.Conv"(%39, %45) {torch.onnx.dilations = [1 : si64, 1 : si64, 1 : si64], torch.onnx.group = 1 : si64, torch.onnx.kernel_shape = [3 : si64, 3 : si64, 3 : si64], torch.onnx.pads = [1 : si64, 1 : si64, 1 : si64, 1 : si64, 1 : si64, 1 : si64], torch.onnx.strides = [1 : si64, 1 : si64, 1 : si64]} : (!torch.vtensor<[1,1,64,128,128],f32>, !torch.vtensor<[16,1,3,3,3],f32>) -> !torch.vtensor<[1,16,64,128,128],f32> 
    %47 = torch.operator "onnx.Relu"(%46) : (!torch.vtensor<[1,16,64,128,128],f32>) -> !torch.vtensor<[1,16,64,128,128],f32> 
    %48 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__25> : tensor<f32>} : () -> !torch.vtensor<[],f32> 
    %49 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__26> : tensor<si8>} : () -> !torch.vtensor<[],si8> 
    %50 = torch.operator "onnx.QuantizeLinear"(%47, %48, %49) : (!torch.vtensor<[1,16,64,128,128],f32>, !torch.vtensor<[],f32>, !torch.vtensor<[],si8>) -> !torch.vtensor<[1,16,64,128,128],si8> 
    %51 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__27> : tensor<f32>} : () -> !torch.vtensor<[],f32> 
    %52 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__28> : tensor<si8>} : () -> !torch.vtensor<[],si8> 
    %53 = torch.operator "onnx.DequantizeLinear"(%50, %51, %52) : (!torch.vtensor<[1,16,64,128,128],si8>, !torch.vtensor<[],f32>, !torch.vtensor<[],si8>) -> !torch.vtensor<[1,16,64,128,128],f32> 
    %54 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__29> : tensor<3xsi64>} : () -> !torch.vtensor<[3],si64> 
    %55 = torch.operator "onnx.Reshape"(%53, %54) {torch.onnx.allowzero = 0 : si64} : (!torch.vtensor<[1,16,64,128,128],f32>, !torch.vtensor<[3],si64>) -> !torch.vtensor<[1,8,2097152],f32> 
    %56 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__30> : tensor<8xf32>} : () -> !torch.vtensor<[8],f32> 
    %57 = torch.operator "onnx.Constant"() {torch.onnx.value = dense_resource<__31> : tensor<8xf32>} : () -> !torch.vtensor<[8],f32> 
    %58 = torch.operator "onnx.InstanceNormalization"(%55, %56, %57) {torch.onnx.epsilon = 9.99999974E-6 : f32} : (!torch.vtensor<[1,8,2097152],f32>, !torch.vtensor<[8],f32>, !torch.vtensor<[8],f32>) -> !torch.vtensor<[1,8,2097152],f32> 
    return %58 : !torch.vtensor<[1,8,2097152],f32>
  }
}

{-#
  dialect_resources: {
    builtin: {
      _module_1.weight: "0x080000000000643F",
      _module_1.bias: "0x080000000000AEBD",
      _module_2.weight: "0x08000000000000BD0000C43D0000D4BD000016BE000080BD0000C43D0000D03C0000483E0000583D0000D0BC0000C8BD000038BD000068BE000028BE0000BCBD0000A0BC0000843D0000023E000024BE0000D0BD00008C3D0000FC3D000070BD00000A3E0000F0BC0000B03C0000443E000042BE00000CBE000068BD0000E4BD0000E83D00000CBE0000ECBD00001CBE000038BE0000F8BD0000183E0000A83D0000943D000080BC0000D0BD0000803C000040BE000010BE0000A8BD0000063E0000063E0000A0BD0000000000000E3E0000483E0000B03D0000303D0000CC3D000010BE0000C03B000032BE00001CBE0000F0BD0000683D0000683D00000ABE0000883D0000F03D000080BC0000903C0000883D0000083E0000423E00000ABE000070BD0000783D00003E3E0000443E00003C3E0000983D000022BE0000203C0000D8BD000018BE00005C3E00003E3E00001ABE00009C3D000000BB000040BC000010BD0000D43D0000BCBD0000D43D0000EC3D0000103E0000C83D000038BE000002BE0000023E0000503D00001CBE000098BD0000343E0000C43D000078BD000070BD000064BE0000A03C00001ABE000044BE0000003B0000103D000088BD0000EC3D0000F4BD0000403E0000163E00001ABE000000BD0000803C0000A03C0000843D0000A43D0000003C0000ACBD0000D83D000036BE0000C4BD000010BD0000CCBD000048BD000010BE000028BD0000703D0000C4BD0000003B0000263E0000603C0000003D0000A03C00008C3D00000C3E0000503E00001E3E00004E3E0000C03C000036BE0000DCBD0000D83D000060BC0000ECBD000022BE000034BE00002ABE000050BD0000483D0000183D000040BE0000803D000040BE00009CBD0000C4BD000088BD000050BD00003ABE000008BE0000403C0000B0BC000094BD0000D43D00003CBE000080BB00004C3E0000BCBD0000343E0000C03D000040BE0000F43D0000203C0000D0BD0000183E0000F0BD0000203C0000E0BC0000203E0000183D0000C83D0000B83D000068BD0000F0BC000002BE0000083E00004E3E0000523E00000EBE0000B03C000094BD0000FCBD0000D8BD0000D43D0000E8BD0000003B0000503D0000383D000030BE0000E0BD000034BE0000943D000088BD00001A3E0000A03D0000843D0000D0BC000038BD0000283D0000A4BD0000D0BC0000203D00003ABE0000FCBD00000A3E00002CBE000038BD0000323E0000A43D0000C03B000022BE0000B43D00000CBE000028BD000058BD0000D0BC0000003C00003E3E0000383D0000523E0000AC3D00000CBE0000843D000012BE0000C4BD0000503D00009C3D0000203E000018BD000060BC0000583D000052BE0000F83D000034BE000000BD000080BC0000D8BD0000803B0000103E00009CBD000002BE000084BD000044BE0000883D000078BD0000F4BD0000D8BD00000C3E0000F83D0000263E0000E43D0000F0BC0000D0BC0000E43D0000463E0000BC3D0000CCBD000030BE000076BE0000003D000036BE000080BE0000463E000002BE000018BD0000103D000068BD000038BE000030BE0000E03D0000B0BC0000343E000090BD0000203D0000C0BC000040BC0000F4BD0000A43D0000583D000040BC000000BB0000C4BD0000123E0000E4BD0000503D000058BD0000CCBD000020BC0000C4BD0000C03B000000BC000016BE0000803C0000483D0000883D000078BD00003ABE000068BD0000403C0000D0BD0000C03C00000E3E00008CBD0000E03C0000903D0000A03C0000403C0000903C00000CBE0000003B0000EC3D0000223E0000C03D0000C03C0000AC3D0000CCBD000022BE000022BE00003E3E00000C3E0000ACBD0000AC3D00004E3E00007A3E00003E3E0000463E0000A03C0000E0BC00000C3E0000D43D0000F4BD0000343E0000D03D0000E83D0000D4BD0000943D000078BD0000363E000000BD00008C3D0000A03D0000DC3D000050BD0000163E0000CCBD0000843D0000ECBD000056BE000042BE0000323E0000B43D00001C3E0000463E00002ABE000008BE000060BC000024BE0000A43D0000CCBD0000C0BD000064BE000080BD0000363E00001C3E00000A3E00001E3E0000383E000060BD0000D03D0000503E0000A03D0000663E0000F83D0000C03D0000B0BD0000C03D0000003C000038BD0000C4BD0000F8BD00000EBE00001A3E0000C83D000070BD000010BE0000DC3D0000B83D000060BC000010BE0000103E0000383D0000F4BD0000623E0000443E000038BD0000123E00009CBD000000BB0000443E0000403D000002BE000028BD000020BD000000BD0000F03C00004A3E0000903C0000F8BD0000D03C0000783D0000A83D000040BE000090BD00002CBE0000BCBD0000E03C00003CBE0000603D",
      _: "0x080000000000803D",
      __1: "0x0800000000",
      __2: "0x080000000000803D",
      __3: "0x0800000000",
      __4: "0x080000000000003C",
      __5: "0x0800000000",
      __6: "0x080000000000003C",
      __7: "0x0800000000",
      __8: "0x080000000000803A",
      __9: "0x0800000000",
      __10: "0x080000000000803A",
      __11: "0x0800000000",
      __12: "0x0800000000000000000000000100000000000000FFFFFFFFFFFFFFFF",
      __13: "0x080000000000803F",
      __14: "0x0800000000000000",
      __15: "0x08000000010000000000000002000000000000000300000000000000",
      __16: "0x08000000010000000000000002000000000000000300000000000000",
      __17: "0x080000000000003D",
      __18: "0x0800000000",
      __19: "0x080000000000003D",
      __20: "0x0800000000",
      __21: "0x080000000000003B",
      __22: "0x0800000000",
      __23: "0x080000000000003B",
      __24: "0x0800000000",
      __25: "0x080000000000803C",
      __26: "0x0800000000",
      __27: "0x080000000000803C",
      __28: "0x0800000000",
      __29: "0x0800000000000000000000000800000000000000FFFFFFFFFFFFFFFF",
      __30: "0x080000000000803F0000803F0000803F0000803F0000803F0000803F0000803F0000803F",
      __31: "0x080000000000000000000000000000000000000000000000000000000000000000000000"
    }
  }
#-}
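
For reference, onnx.InstanceNormalization (the ops producing %25 and %58 above) computes y = scale * (x - mean) / sqrt(var + epsilon) + bias, where mean and variance are taken per instance and per channel over the remaining axes. Below is a minimal NumPy sketch of that reference semantics that can be used to check the expected value independently of IREE; the helper name and the reduced test shape are placeholders, not taken from this model.

import numpy as np

def instance_norm_reference(x, scale, bias, epsilon=9.99999974e-6):
    # Normalize over every axis except N (batch) and C (channel).
    axes = tuple(range(2, x.ndim))
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    # Broadcast the per-channel scale/bias across N and the remaining axes.
    shape = (1, -1) + (1,) * (x.ndim - 2)
    return scale.reshape(shape) * (x - mean) / np.sqrt(var + epsilon) + bias.reshape(shape)

# Shaped like the second InstanceNormalization (%58), but with a smaller last axis.
x = np.random.randn(1, 8, 4096).astype(np.float32)
y = instance_norm_reference(x, np.ones(8, np.float32), np.zeros(8, np.float32))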

Uploading input.0.bin.txt…

Steps to reproduce your issue

iree-compile model.torch_onnx.mlir --iree-hal-target-backends=llvm-cpu -o out.vmfb
iree-run-module --module=out.vmfb --device="local-task" --input="1x1x64x128x128xf32=@input.0.bin" --expected_output="1x8x2097152xf32=@golden_output.0.bin"
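
Since the 65 MB golden_output.0.bin cannot be attached, it can be regenerated locally by running the original ONNX model on the attached input with onnxruntime. A rough sketch, assuming the source model is available as model.onnx (that file name is an assumption; input.0.bin and the shapes come from the IR above):

import numpy as np
import onnxruntime as ort

# The torch_jit entry point above takes a single 1x1x64x128x128 f32 tensor.
x = np.fromfile("input.0.bin", dtype=np.float32).reshape(1, 1, 64, 128, 128)

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
(golden,) = sess.run(None, {input_name: x})   # expected shape (1, 8, 2097152)

golden.astype(np.float32).tofile("golden_output.0.bin")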

IREE Version: IREE compiler version 20240819.990 @ aeda14995f16ed1302db616adf0c03acf80f27ee LLVM version 20.0.0git

I am not able to upload the golden_output.0.bin.txt file because its size is 65 MB.

What component(s) does this issue relate to?

Runtime

Version information

No response

Additional context

No response

pdhirajkumarprasad · Aug 13 '24 13:08

Duplicate of https://github.com/iree-org/iree/issues/18200#issuecomment-2286472770? Both might be a problem with DequantizeLinear.
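
For reference, the QuantizeLinear/DequantizeLinear pairs in this model round-trip values through int8, so a backend difference in rounding or saturation shows up as a shift of up to one scale step in the dequantized result. A minimal NumPy sketch of the ONNX reference semantics, useful for spot-checking a suspect DequantizeLinear; the helper names and the scale value below are illustrative placeholders, not taken from the model's resources:

import numpy as np

def quantize_linear(x, scale, zero_point):
    # ONNX QuantizeLinear: divide by scale, round half to even, add zero point, saturate to int8.
    q = np.rint(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize_linear(q, scale, zero_point):
    # ONNX DequantizeLinear: subtract the zero point and rescale to float.
    return (q.astype(np.float32) - np.float32(zero_point)) * np.float32(scale)

scale, zp = 0.0625, 0   # illustrative values; the real scales/zero points live in the dense resources above
x = np.float32(-0.713635)
print(dequantize_linear(quantize_linear(x, scale, zp), scale, zp))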

IanWood1 · Aug 13 '24 16:08

@IanWood1 This is an issue exclusively with InstanceNormalization when used on certain inputs. I'll work with @pdhirajkumarprasad on getting a smaller reproducer. It may be a front-end (e.g., torch-mlir) issue.

zjgarvey · Aug 13 '24 19:08

Closing this, as the issue is no longer seen; we are analyzing the numeric failure and will file/monitor it separately.

pdhirajkumarprasad · Dec 02 '24 05:12