onnx-tensorflow
KeyError with PyTorch either Pad or GlobalAvgPooling
Hi,
I ran into an issue when converting from PyTorch -> ONNX -> TF. During the prepare step, I get the following error message:
Traceback (most recent call last):
  File "convert_to_tf.py", line 35, in
From some searching, it seems the Pad operator should have four parameters, but in my ONNX graph it only has two (see issue https://github.com/onnx/onnx-tensorflow/issues/21). I am wondering how I can fix this; any suggestion is welcome.
My PyTorch model is a simple UNet with a global average pooling layer at the end.
Here are the two possible error places:
self.global_pool = nn.AdaptiveAvgPool2d((1, 1))
or
diffY = x2.size()[2] - x1.size()[2]
diffX = x2.size()[3] - x1.size()[3]

x1 = F.pad(x1, [diffX // 2, diffX - diffX // 2,
                diffY // 2, diffY - diffY // 2])
TF version: 2.8.0
onnx-tf version: 1.10.0
onnx version: 1.12.0
Python version: 3.7.10
Thanks!
As stated in its README, this repository appears to be deprecated.
So I have started to create and test another tool by myself. If you don't mind, could you share the ONNX file with me? I would like to test your ONNX file.
Here is the tool I am creating. https://github.com/PINTO0309/onnx2tf
@PINTO0309 Thanks for your reply. It will be great if you can try my onnx file. Here it is: https://drive.google.com/file/d/1WmzBLW2HExUmEq-auEZ1bc1YAPooOUgo/view?usp=share_link
Nothing went wrong, and the conversion appears to be correct. However, I have only verified that the conversion completes successfully; I have not yet checked whether accuracy is degraded.
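The accuracy check itself is straightforward to script. As a self-contained sketch of the workflow (using a tiny stand-in Keras network rather than the converted UNet, since the real files are not part of this snippet), convert a model to TFLite in memory and compare its output against the reference:

```python
import numpy as np
import tensorflow as tf

# Stand-in network; the real check would load the converted saved_model
# or .tflite file produced by onnx2tf instead.
keras_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 92)),   # NHWC, per the log above
    tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(keras_model).convert()

x = np.random.randn(1, 64, 64, 92).astype(np.float32)
ref = keras_model(x).numpy()

# Run the TFLite version on the same input and compare elementwise
interp = tf.lite.Interpreter(model_content=tflite_bytes)
interp.allocate_tensors()
interp.set_tensor(interp.get_input_details()[0]["index"], x)
interp.invoke()
out = interp.get_tensor(interp.get_output_details()[0]["index"])

print("max abs diff:", np.abs(ref - out).max())
```

For the actual converted model, the same comparison would run the original ONNX file through onnxruntime and the converted model through `tf.lite.Interpreter`; note that onnx2tf emits NHWC models, so the NCHW ONNX input must be transposed before feeding the TF side.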
$ python -c "import tensorflow as tf; print(tf.__version__)"
2.10.0
$ python -V
Python 3.8.10
$ pip install -U onnx \
&& pip install -U nvidia-pyindex \
&& pip install -U onnx-graphsurgeon \
&& pip install -U onnxsim \
&& pip install -U simple_onnx_processing_tools \
&& pip install -U onnx2tf
$ onnx2tf -V
1.1.46
or
$ docker run --rm -it \
-v `pwd`:/workdir \
-w /workdir \
ghcr.io/pinto0309/onnx2tf:1.1.46
$ onnx2tf -i tf.onnx
Simplifying...
Finish! Here is the difference:
┏━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃ ┃ Original Model ┃ Simplified Model ┃
┡━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ Cast │ 31 │ 0 │
│ Concat │ 14 │ 4 │
│ Constant │ 78 │ 0 │
│ ConstantOfShape │ 4 │ 0 │
│ Conv │ 18 │ 18 │
│ ConvTranspose │ 4 │ 4 │
│ Div │ 12 │ 0 │
│ Equal │ 1 │ 0 │
│ Gather │ 10 │ 0 │
│ Gemm │ 1 │ 1 │
│ GlobalAveragePool │ 1 │ 1 │
│ Identity │ 2 │ 0 │
│ If │ 1 │ 0 │
│ MaxPool │ 4 │ 4 │
│ Pad │ 4 │ 0 │
│ Relu │ 18 │ 18 │
│ Reshape │ 8 │ 0 │
│ Shape │ 10 │ 0 │
│ Slice │ 4 │ 0 │
│ Squeeze │ 1 │ 1 │
│ Sub │ 15 │ 0 │
│ Transpose │ 4 │ 0 │
│ Unsqueeze │ 24 │ 0 │
│ Model Size │ 118.6MiB │ 118.6MiB │
└───────────────────┴────────────────┴──────────────────┘
Model optimizing complete!
Automatic generation of each OP name started ========================================
Automatic generation of each OP name complete!
Model loaded ========================================================================
Model convertion started ============================================================
INFO: input_op_name: input.1 shape: [1, 92, 64, 64] dtype: float32
INFO: onnx_op_type: Conv onnx_op_name: Conv_2
INFO: input_name.1: input.1 shape: [1, 92, 64, 64] dtype: float32
INFO: input_name.2: onnx::Conv_453 shape: [64, 92, 3, 3] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_454 shape: [64] dtype: <class 'numpy.float32'>
INFO: output_name.1: input.4 shape: [1, 64, 64, 64] dtype: float32
INFO: tf_op_type: convolution_v2
INFO: input.1.input: name: input.1 shape: (1, 64, 64, 92) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (3, 3, 92, 64) dtype: float32
INFO: input.3.bias: shape: (64,) dtype: float32
INFO: input.4.strides: val: [1, 1]
INFO: input.5.dilations: val: [1, 1]
INFO: input.6.padding: val: SAME
INFO: input.7.group: val: 1
INFO: output.1.output: name: tf.math.add/Add:0 shape: (1, 64, 64, 64) dtype: <dtype: 'float32'>
INFO: onnx_op_type: Relu onnx_op_name: Relu_3
INFO: input_name.1: input.4 shape: [1, 64, 64, 64] dtype: float32
INFO: output_name.1: onnx::Conv_123 shape: [1, 64, 64, 64] dtype: float32
INFO: tf_op_type: relu
INFO: input.1.features: name: tf.math.add/Add:0 shape: (1, 64, 64, 64) dtype: <dtype: 'float32'>
INFO: output.1.output: name: tf.nn.relu/Relu:0 shape: (1, 64, 64, 64) dtype: <dtype: 'float32'>
:
saved_model output started ==========================================================
saved_model output complete!
Estimated count of arithmetic ops: 6.443 G ops, equivalently 3.221 G MACs
Float32 tflite output complete!
Estimated count of arithmetic ops: 6.443 G ops, equivalently 3.221 G MACs
Float16 tflite output complete!