tensorflow-onnx
I successfully converted the model from pb to onnx, but failed to load it
I used this command:

python -m tf2onnx.convert \
    --graphdef pb_path \
    --output onnx_path \
    --inputs images:0 \
    --outputs rnn_output:0 \
    --opset 15
log:
Use tf.compat.v1.graph_util.extract_sub_graph
INFO:tensorflow:Froze 0 variables.
2021-12-16 15:46:56,407 - INFO - Froze 0 variables.
INFO:tensorflow:Converted 0 variables to const ops.
2021-12-16 15:46:56,419 - INFO - Converted 0 variables to const ops.
2021-12-16 15:46:57,068 - INFO - Using tensorflow=2.1.0, onnx=1.10.2, tf2onnx=1.9.3/1190aa
2021-12-16 15:46:57,068 - INFO - Using opset <onnx, 15>
2021-12-16 15:46:59,032 - INFO - Computed 0 values for constant folding
2021-12-16 15:47:00,480 - INFO - Optimizing ONNX model
2021-12-16 15:47:02,231 - INFO - After optimization: BatchNormalization -20 (20->0), Cast -5 (19->14), Const -129 (224->95), Identity -68 (90->22), Reshape -7 (10->3), Transpose -140 (144->4)
2021-12-16 15:47:02,295 - INFO -
2021-12-16 15:47:02,295 - INFO - Successfully converted TensorFlow model pb_path to ONNX
2021-12-16 15:47:02,295 - INFO - Model inputs: ['images:0']
2021-12-16 15:47:02,295 - INFO - Model outputs: ['rnn_output:0']
2021-12-16 15:47:02,295 - INFO - ONNX model is saved at onnx_path
When I run:

model = onnx.load(onnx_path)
onnx.checker.check_model(model)

I get this error:
Traceback (most recent call last):
File "tool/onnx_test.py", line 50, in
==> Context: Bad node spec for node. Name: deep_bidirectional_lstm/bidirectional_rnn/fw/fw/TensorArrayV2 OpType: TensorListReserve
My model structure is CRNN. The CNN part can be loaded successfully.
Do you use the latest version of tf2onnx? It is better if you can offer a repro script code.
Hi, thank you for your reply!
I installed it from PyPI.

print(tf2onnx.__version__)  # 1.9.3

Version information: onnx 1.10.2, onnxruntime 1.9.0

The test code:

import onnx
import onnxruntime as ort

onnx_file = '../612336_rnnoutput.onnx'
model = onnx.load(onnx_file)
onnx.checker.check_model(model)

so = ort.SessionOptions()
session = ort.InferenceSession(onnx_file, so)

inname = [inp.name for inp in session.get_inputs()]
outname = [out.name for out in session.get_outputs()]
print("inputs name:", inname, "|| outputs name:", outname)
I found a similar issue in onnx: https://hub.fastgit.org/onnx/onnx/issues/3172. tensorflow-onnx does support TensorListReserve, TensorListGetItem, TensorListSetItem, TensorListStack, and TensorListFromTensor. But in https://hub.fastgit.org/onnx/onnx/blob/master/docs/Operators.md I can't find any of these ops. So how does onnx or onnxruntime load the tf2onnx-converted model?
Thanks, could you also provide the pb_path? Then I can repro this issue in my dev env.
I'm sorry, I've been a little busy recently. I don't have Google Drive. Can you open Baidu Netdisk? https://pan.baidu.com/s/1wZAf9rA4fjL7B1PnDW-k-A password: 9dg0
The input is (1, None, None, 3) float32; my suggested size is (1, 32, 1024, 3).
Seems this link is out of date. Could you just post it in the issue?
It's been over 3 months, so closing this. Feel free to open a new one if the issue still exists.