onnx-tensorflow
Why does onnx-tensorflow add Transpose layers for each Conv2D layer?
Describe the bug
- Why does onnx-tensorflow add Transpose layers for each Conv2D layer?
- Why does onnx-tensorflow use multiple Conv2D layers instead of one GroupedConv2D layer?
This increases inference time, so a PB model converted with onnx-tensorflow is slower than the equivalent native TF model.
If I want to convert an NCHW ONNX model to an NCHW PB model to run on GPU, and then convert that NCHW PB model to an NHWC TFLITE model, how should I do this? (Currently `tf.lite.TFLiteConverter.from_saved_model(saved_model_export_dir)` can convert an NCHW PB model to an NHWC TFLITE model without adding extra Transpose layers.)
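For reference, a minimal sketch of the `from_saved_model` path mentioned above: the converter keeps the SavedModel's own layout, so an NHWC graph stays NHWC with no extra Transpose ops inserted by the converter itself. The tiny Conv2D module below is illustrative, not the model from this issue.

```python
import os
import tempfile

import tensorflow as tf


class ConvModule(tf.Module):
    def __init__(self):
        super().__init__()
        # 3x3 conv, 3 input channels -> 8 output channels (NHWC layout).
        self.w = tf.Variable(tf.random.normal([3, 3, 3, 8]))

    @tf.function(input_signature=[tf.TensorSpec([1, 32, 32, 3], tf.float32)])
    def __call__(self, x):
        return tf.nn.conv2d(x, self.w, strides=1, padding="SAME")


# Save a tiny NHWC SavedModel, then convert it to TFLite.
saved_model_export_dir = os.path.join(tempfile.mkdtemp(), "saved_model")
tf.saved_model.save(ConvModule(), saved_model_export_dir)

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_export_dir)
tflite_model = converter.convert()  # serialized FlatBuffer bytes

out_path = os.path.join(tempfile.mkdtemp(), "model.tflite")
with open(out_path, "wb") as f:
    f.write(tflite_model)
```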
To Reproduce
https://colab.research.google.com/gist/AlexeyAB/10b1bf880152b1ad7ca116b428863068/pytorch-onnx-tf-extra-transpose.ipynb
pytorch_onnx_tf_extra_transpose_ipynp.zip
Python, ONNX, ONNX-TF, Tensorflow version
This section can be obtained by running get_version.py from the util folder.
Python version:
3.6.9 (default, Jul 17 2020, 12:50:27)
[GCC 8.4.0]
ONNX version:
1.7.0
ONNX-TF version:
1.6.0
Tensorflow version:
2.4.0-dev20201010
Additional context
PB-model with extra Transpose layers:
Result TFLITE-model with extra Transpose layers:
@AlexeyAB I have the same problem; the Transpose ops increase inference time. Do you have any solutions?
Same here. Any way to deal with these Transpose ops? They cost too much time.
Any update on this issue?
I have the same issue
I have same issue, pytorch(NCHW) --> onnx(NCHW) --> tf-saved-model(NHWC) [transpose layers introduced here].
@AlexeyAB Please suggest if you found solution?
I found the solution. Perform the following to convert from PyTorch to TFLite (NCHW ---> NHWC):
- PyTorch official export to ONNX: [N,C,H,W] ---> [N,C,H,W]
- ONNX conversion to OpenVINO IR format: [N,C,H,W] ---> [N,C,H,W] (using the OpenVINO docker image)
- OpenVINO IR format to TensorFlow format: [N,C,H,W] ---> [N,H,W,C] (using the openvino2tensorflow :zap: :bulb: docker image, see https://github.com/PINTO0309/openvino2tensorflow.git)
- TensorFlow format to TFLite: [N,H,W,C] ---> [N,H,W,C]
:smile:
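Sketched as shell commands, the pipeline above looks roughly like this. The file names, the export script, and the exact CLI flags (`mo`, `openvino2tensorflow --model_path`, `--output_saved_model`) are from my setup and may differ by version, so check each tool's README before copying:

```shell
# 1. PyTorch -> ONNX (hypothetical script that calls torch.onnx.export);
#    layout stays NCHW.
python export_to_onnx.py

# 2. ONNX -> OpenVINO IR (inside the OpenVINO docker image);
#    layout is still NCHW.
mo --input_model model.onnx --output_dir openvino_ir

# 3. OpenVINO IR -> TensorFlow SavedModel (inside the openvino2tensorflow
#    docker image); this step rewrites the graph from NCHW to NHWC.
openvino2tensorflow \
  --model_path openvino_ir/model.xml \
  --output_saved_model

# 4. SavedModel -> TFLite; layout stays NHWC.
python -c "import tensorflow as tf; \
  open('model.tflite', 'wb').write( \
    tf.lite.TFLiteConverter.from_saved_model('saved_model').convert())"
```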
At this time, this tool, which is a greatly improved version of openvino2tensorflow, is much easier to use and more accurate for conversion. Incidentally, GroupedConvolution is also supported.
"Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massive Transpose extrapolation problem in onnx-tensorflow (onnx-tf)." https://github.com/PINTO0309/onnx2tf