onnx-tensorflow
Can't convert onnx to tf format on Google Colab
I trained a custom YOLOv5s model on Google Colab and I'm trying to convert the onnx file to a tf file so that I can subsequently convert it to a tflite file for an android app. However, I'm getting an error 'BackendIsNotSupposedToImplementIt: Unsqueeze version 13 is not implemented.' I'm completely new to this and have been referring to tutorials so far. The code is attached here.
```python
!pip install onnx_tf

import os
import numpy as np
import onnx
import tensorflow as tf
from onnx_tf.backend import prepare

# Load the exported ONNX model and convert it to a TensorFlow graph
onnx_model = onnx.load('/content/yolov5/runs/train/exp/weights/best.onnx')
tf_rep = prepare(onnx_model)
pb_path = "/content/yolov5/runs/train/exp/weights.pb"
tf_rep.export_graph(pb_path)

assert os.path.exists(pb_path)
print(".pb model converted successfully.")
```
Unsqueeze for opset 13 is now supported, but only on master. You need to build from source: do a `git clone` and run `pip install -e .`
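For anyone unsure what "build from source" means here, a minimal sketch of the install commands (assuming `git` and `pip` are available; the repository URL is the official onnx/onnx-tensorflow repo):

```shell
# Clone the onnx-tensorflow repository and install it in editable mode,
# which picks up fixes (like Unsqueeze-13 support) not yet released on PyPI
git clone https://github.com/onnx/onnx-tensorflow.git
cd onnx-tensorflow
pip install -e .
```

After this, `import onnx_tf` resolves to the checked-out source tree rather than the PyPI release.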
@chinhuang007, when will it be released on pypi.org with this support?
@axlecky I found a solution. If you use PyTorch, you should export the ONNX model with `opset_version=12`, like this:
```python
data_input = torch.zeros(net.input_shape)
data_input = data_input[None, :]  # add a batch dimension
torch.onnx.export(
    net.model,
    data_input,
    onnx_model_path,
    opset_version=12,
    input_names=['input'],
    output_names=['output'],
    verbose=False,
)
```
And it works with onnx-tf 1.10.0 from pypi.org.
@ildar-ceo I am converting a simple text classification model from PyTorch to ONNX to TensorFlow. PyTorch to ONNX works fine, but I cannot convert from ONNX to TF on Google Colab. I specified `opset_version=12`, and ONNX to TF still does not work. This is my traceback:

```
TypeError: No common supertype of TensorArraySpec(TensorShape([64]), tf.float32, True, True) and TensorArraySpec(TensorShape([64]), tf.float32, None, True).
```
Hi, I am facing the same issue, but I can't fix it with your method. I saved the PyTorch model with this code, specifying the opset version:
```python
torch.onnx.export(
    model,                     # PyTorch model
    sample_input,              # input tensor
    'EQT.onnx',                # output file name
    input_names=['input'],     # input tensor name (arbitrary)
    output_names=['output'],   # output tensor name (arbitrary)
    opset_version=12,
)
```
and then convert to Tensorflow
```python
onnx_model = onnx.load("EQT.onnx")   # load the ONNX model
tf_rep = prepare(onnx_model)         # prepare the TF representation
tf_rep.export_graph("./EQT")         # export the model
```
but I still get the error `BackendIsNotSupposedToImplementIt: Unsqueeze version 13 is not implemented.`
I am using onnx-tf 1.10.0 from PyPI.