onnx2pytorch
Inconsistency between pytorch and onnxruntime
I built an ONNX graph with only one BatchNormalization layer and two Transpose layers, as follows:

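For reference, here is a minimal sketch of how a graph with this structure (Transpose -> BatchNormalization -> Transpose) can be built with the onnx helper API. The permutations and initializer values below are placeholders; the actual model is in the linked incon.onnx file and may differ in its attributes.

import numpy as np
import onnx
from onnx import helper, numpy_helper, TensorProto

# Placeholder BatchNormalization parameters (the real model's values may differ).
C = 3
scale = numpy_helper.from_array(np.random.rand(C).astype(np.float32), name="scale")
bias = numpy_helper.from_array(np.random.rand(C).astype(np.float32), name="bias")
mean = numpy_helper.from_array(np.random.rand(C).astype(np.float32), name="mean")
var = numpy_helper.from_array(np.random.rand(C).astype(np.float32), name="var")

x = helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 3, 3, 3])
y = helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 3, 3, 3])

nodes = [
    helper.make_node("Transpose", ["x"], ["t1"], perm=[0, 3, 1, 2]),
    helper.make_node("BatchNormalization", ["t1", "scale", "bias", "mean", "var"], ["bn"]),
    helper.make_node("Transpose", ["bn"], ["y"], perm=[0, 2, 3, 1]),
]
graph = helper.make_graph(nodes, "incon_like", [x], [y],
                          initializer=[scale, bias, mean, var])
onnx.save(helper.make_model(graph), "incon_like.onnx")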
However, after converting the ONNX model to PyTorch, the outputs of onnxruntime and PyTorch are different:
To reproduce, please download the model from this link: https://drive.google.com/file/d/1KQ-ZvdghB2Fw0b1U42M1Q8DFfJzKMKuF/view?usp=sharing
import onnx
from onnx2pytorch import ConvertModel
import numpy as np
import torch
import onnxruntime as ort

# Run the converted PyTorch model on a random input.
input = np.random.rand(1, 3, 3, 3)
onnx_model = onnx.load("incon.onnx")
torch_model = ConvertModel(onnx_model, experimental=True)
torch_input = torch.from_numpy(input)
torch_model.double()
torch_pred = torch_model(torch_input)

# Run the original ONNX model on the same input with onnxruntime.
providers = [
    ('CUDAExecutionProvider', {
        'device_id': 0,
        'arena_extend_strategy': 'kNextPowerOfTwo',
        'gpu_mem_limit': 10 * 1024 * 1024 * 1024,  # 10 GB
        'cudnn_conv_algo_search': 'EXHAUSTIVE',
        'do_copy_in_default_stream': True,
    }),
    'CPUExecutionProvider',
]
onnx_path = "incon.onnx"
ort_session = ort.InferenceSession(onnx_path, providers=providers)
input_name = ort_session.get_inputs()[0].name
output_name = ort_session.get_outputs()[0].name
input = input.astype('float32')
onnx_pred = ort_session.run([output_name], {input_name: input})[0]
print("ONNXRuntime output: ", onnx_pred[0])
Result of PyTorch:
tensor([[[[ 0.8651,  0.4476,  1.0992],
          [-1.4390, -1.9661,  0.6654],
          [-0.0661,  0.8117,  0.4977]],

         [[ 1.4929, -1.0736, -1.8301],
          [-0.3819,  0.1353, -1.3534],
          [-0.5915,  1.0055,  0.1152]],

         [[ 1.4507,  0.9089,  0.7818],
          [-0.2749,  0.6187,  0.7877],
          [-1.0552, -0.8879, -0.7635]]]], dtype=torch.float64,
       grad_fn=<PermuteBackward0>)
Result of ONNXRuntime:
[[[0.6175244  0.7708563  0.99578035]
  [0.06858005 0.03348556 0.86044073]
  [0.39565903 0.8820858  0.8081321 ]]

 [[0.7670855  0.30615166 0.08200629]
  [0.32042617 0.6754674  0.2307212 ]
  [0.27048445 0.94129366 0.6888292 ]]

 [[0.75703746 0.9117755  0.89674425]
  [0.3459281  0.8231245  0.89860445]
  [0.16000679 0.3628597  0.41474345]]]
I have the same problem.