onnx-tensorflow
ONNX to TF conversion is removing information such as the name of operators and input/output nodes
Describe the bug
When I try to convert an ONNX model to TensorFlow, the generated graph is missing key information such as the names of the model's input and output nodes. In addition, I lose the names of operators such as Conv, Gemm, etc.
To Reproduce
Instructions to reproduce the problem:
I am trying to convert a proprietary model at work, but for now we can just use mobilenetv2-7.onnx to reproduce the issue.
The code I am using is:
import onnx
from onnx_tf.backend import prepare
filename = 'mobilenetv2-7.onnx'
out_dir = 'saved_model'
onnx_model = onnx.load(filename) # load onnx model
tf_rep = prepare(onnx_model, gen_tensor_dict=True) # prepare tf representation
tf_rep.export_graph(out_dir)
Now, if you load the ONNX model and the generated saved_model/saved_model.pb into separate Netron windows, you can see that the generated model's graph has no information about the output node, and there is no obvious input node. Also, the operators you can find in the generated model's graph do not retain the names from the ONNX graph, such as Conv, Gemm, etc.
I have tried the conversion on a local Ubuntu machine and on an AWS SageMaker ml.p2.xlarge instance, and I get the same results.
ONNX model file
Model: mobilenetv2-7.onnx
Python, ONNX, ONNX-TF, TensorFlow versions
python==3.8.10 onnx==1.10.1 onnx-tf==1.9.0 tensorflow==2.6.0
Additional context
I'm not sure whether this is useful information, but my motivation is to obtain a TensorFlow model that we can then convert to TFLite, which is the model we will ultimately run. Maybe you've encountered this issue before, @chinhuang007?
I see this as well, but only at the TFLite step.
My (initially PyTorch-converted) ONNX model has 3 outputs with names "b", "a", "c".
The TF model output (tf_model(**input_dict)) is {"b": tensor1, "a": tensor2, "c": tensor3}.
But the exported TFLite model has no information about the outputs. They are called Identity1, Identity2, Identity3 for some reason, and I don't know how to map them back to "b", "a", "c".
Python, ONNX, ONNX-TF, TensorFlow versions
python==3.7.10 onnx==1.10.2 onnx-tf==1.9.0 tensorflow==2.6.0
Sorry for the late response. The exported TF saved model does have the output names, as you can see by adding a couple of lines:
m = tf.saved_model.load(out_dir)
print('loaded model outputs = ', m.signatures['serving_default'].structured_outputs)
I am not familiar with tflite conversion therefore can't comment on that.
@chinhuang007 The TF model is indeed converted correctly. Maybe I should look for the bug in the TF-Lite repo instead.
@meandmymind I am experiencing the same issue. Did you find a lead on this?
I am able to get the correct output from the TF model using @chinhuang007's method. However, converting to TFLite seems to lose this information. The input names are correct, but the output names are PartitionedCall:0, PartitionedCall:1, ..., PartitionedCall:N.
Currently the only way to interpret the output tensors is a hardcoded lookup where I have manually mapped each PartitionedCall:N to the actual output name. This is an issue because the mapping seems to be somewhat arbitrary and needs manual inspection (using Netron) to decode for each model architecture.
Environment python==3.8.12 onnx==1.10.2 onnx-tf==1.9.0 tensorflow==2.7.0
@jamjambles In my case the outputs were sorted lexicographically by output name :DDD
e.g. my torch model outputs [1, 2, 3] (corresponding to names ["b", "a", "c"]); the TensorFlow output is {"b": 1, "a": 2, "c": 3}; the TFLite output is [2, 1, 3] (corresponding to the sorted names ["a", "b", "c"]).
So in my script I map the TFLite outputs to the sorted names.
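The mapping step above can be sketched as a small helper. This is a hedged sketch that assumes the ordering behavior described in this thread (TFLite returning outputs in lexicographic order of the original ONNX output names); reorder_tflite_outputs and its arguments are made-up names for illustration, not part of any TFLite API:

```python
# Sketch of the workaround described above. Assumption (from this thread,
# not documented TFLite behavior): the TFLite model returns its outputs
# sorted lexicographically by the original ONNX output names.

def reorder_tflite_outputs(tflite_outputs, original_names):
    """Map TFLite outputs (in lexicographic-name order) back to a dict
    keyed by the ONNX model's output names, in their original order."""
    sorted_names = sorted(original_names)        # TFLite's apparent order
    by_name = dict(zip(sorted_names, tflite_outputs))
    return {name: by_name[name] for name in original_names}

# The thread's example: torch outputs [1, 2, 3] for names ["b", "a", "c"];
# TFLite returns [2, 1, 3] in sorted-name order ["a", "b", "c"].
print(reorder_tflite_outputs([2, 1, 3], ["b", "a", "c"]))  # {'b': 1, 'a': 2, 'c': 3}
```

This only works if the output names are unique and the lexicographic assumption actually holds for your converter version, so it is worth verifying once per model against Netron.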
That's easily correctable: https://github.com/PINTO0309/tflite2json2tflite