Can't convert tensorflow to onnx: __inference_import/TRTEngineOp_6_native_segment_3604 is not an accepted attribute value.

Open poonnatuch opened this issue 1 year ago • 7 comments

I can't convert a TensorFlow model to ONNX.

Question

I'm having trouble converting a TensorFlow model to ONNX. I'm encountering the following error: TypeError: 'name: "__inference_import/TRTEngineOp_6_native_segment_3604" ' is not an accepted attribute value.

Error Traceback:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/opt/conda/lib/python3.8/site-packages/tf2onnx/convert.py", line 710, in <module>
    main()
  File "/opt/conda/lib/python3.8/site-packages/tf2onnx/convert.py", line 273, in main
    model_proto, _ = _convert_common(
  File "/opt/conda/lib/python3.8/site-packages/tf2onnx/convert.py", line 168, in _convert_common
    g = process_tf_graph(tf_graph, const_node_values=const_node_values,
  File "/opt/conda/lib/python3.8/site-packages/tf2onnx/tfonnx.py", line 459, in process_tf_graph
    main_g, subgraphs = graphs_from_tf(tf_graph, input_names, output_names, shape_override, const_node_values,
  File "/opt/conda/lib/python3.8/site-packages/tf2onnx/tfonnx.py", line 474, in graphs_from_tf
    ordered_func = resolve_functions(tf_graph)
  File "/opt/conda/lib/python3.8/site-packages/tf2onnx/tf_loader.py", line 760, in resolve_functions
    _, _, _, _, _, functions = tflist_to_onnx(tf_graph, {})
  File "/opt/conda/lib/python3.8/site-packages/tf2onnx/tf_utils.py", line 462, in tflist_to_onnx
    onnx_node = helper.make_node(node_type, input_names, output_names, name=node.name, **attr)
  File "/opt/conda/lib/python3.8/site-packages/onnx/helper.py", line 163, in make_node
    node.attribute.extend(
  File "/opt/conda/lib/python3.8/site-packages/onnx/helper.py", line 164, in <genexpr>
    make_attribute(key, value)
  File "/opt/conda/lib/python3.8/site-packages/onnx/helper.py", line 885, in make_attribute
    raise TypeError(f"'{value}' is not an accepted attribute value.")
TypeError: 'name: "__inference_import/TRTEngineOp_6_native_segment_3604"
' is not an accepted attribute value.
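
The node named in the error is a TF-TRT native segment, so it helps to confirm whether the frozen graph actually contains TRTEngineOp nodes and the function-valued attributes that tf2onnx tries to turn into ONNX node attributes in tflist_to_onnx. A minimal sketch, assuming model.pb is the frozen GraphDef used in the command below:

# Minimal sketch: list TF-TRT nodes and library functions in the frozen graph.
# Assumes model.pb is the frozen GraphDef used in the command below.
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile('model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op == 'TRTEngineOp':
        # TRTEngineOp nodes reference their original (native) segment through
        # function-valued attributes, which is what the error message names.
        print(node.name, list(node.attr.keys()))

# Native segments are stored as functions in the graph's function library.
print([fn.signature.name for fn in graph_def.library.function])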

Command I used:

python -m tf2onnx.convert --graphdef model.pb --inputs import/input_1:0,import/input_2_2:0 --outputs last_concat:0 --output test.onnx --opset 15

What I have tried:

  • Model opset: 13, 14, 15 and 18
  • tf version 2.12 and 2.13.1
  • onnx 1.14 and 1.15.1
  • Attempted conversion using both saved_model and graphdef, but hit the same error either way (a programmatic sketch of the graphdef path is shown after this list).
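
For reference, the graphdef attempt can also be driven from Python rather than the CLI, which exercises the same process_tf_graph path shown in the traceback. A minimal sketch, assuming tf2onnx's from_graph_def API and the same tensor names as the command above:

# Minimal sketch: programmatic equivalent of the tf2onnx.convert CLI call above.
# Assumes tf2onnx.convert.from_graph_def and the same input/output tensor names.
import tensorflow as tf
import tf2onnx

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile('model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

model_proto, _ = tf2onnx.convert.from_graph_def(
    graph_def,
    input_names=['import/input_1:0', 'import/input_2_2:0'],
    output_names=['last_concat:0'],
    opset=15,
    output_path='test.onnx',
)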

Notes

I'm not very familiar with TensorFlow, so I might be converting the wrong type of model. Normally, I use this model for inference using the following Python code:

import tensorflow as tf
from tensorflow.python.platform import gfile

# Load the frozen GraphDef from disk.
modelfile = 'model.pb'
with gfile.GFile(modelfile, 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# `config` is a tf.compat.v1.ConfigProto defined elsewhere.
sess = tf.compat.v1.Session(config=config)
sess.graph.as_default()
# import_graph_def uses the default "import" name scope, which is why the
# tensor names below carry the "import/" prefix.
tf.import_graph_def(graph_def)

# img1 and img2 are the preprocessed input arrays.
preds = sess.run('last_concat:0', {'import/input_1:0': img1, 'import/input_2_2:0': img2})
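
To double-check that the tensor names passed to tf2onnx match what is actually in the graph, the imported graph's operations can be listed. A minimal sketch, reusing the session from the snippet above:

# Minimal sketch: list candidate input/output tensors of the imported graph.
# Reuses `sess` from the inference snippet above.
for op in sess.graph.get_operations():
    if op.type in ('Placeholder', 'TRTEngineOp') or 'last_concat' in op.name:
        print(op.type, op.name, [t.name for t in op.outputs])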

poonnatuch avatar Dec 14 '23 09:12 poonnatuch

Is it possible for you to share the model.pb file here?

fatcat-z avatar Dec 18 '23 10:12 fatcat-z

Hi, I have the same problem. Here is my model: saved_model.docx

Please change .gif to .pb.

thomasjeff0420 avatar Dec 25 '23 13:12 thomasjeff0420

Sorry for a very late reply. I can't share the model weights with you. I don't know if this is related to the problem, but I think the model's weights might have been optimized with an older TensorRT version (older than 8.5) and developed with a very old TensorFlow version (at least as old as 1.x).

poonnatuch avatar Dec 29 '23 23:12 poonnatuch

Yes, that's what I want. My framework only works on TF 1.15. When I tried to convert my model from Darknet to TF, I always ran into a RealDiv node, even when I reshaped the input. I am very confused now.

thomasjeff0420 avatar Dec 31 '23 03:12 thomasjeff0420

Sorry for a very late reply. I can't share the model weights with you. I don't know if this is related to the problem, but I think the model's weights might have been optimized with an older TensorRT version (older than 8.5) and developed with a very old TensorFlow version (at least as old as 1.x).

Understood. If the TF model has to work with 1.15 (a 1.x version), why did you use TF 2.12 and 2.13.1 in your environment to convert it to an ONNX file?

fatcat-z avatar Jan 02 '24 02:01 fatcat-z

I normally run it (the model) on a newer version of TensorFlow (2.x), so I assumed it would be OK to convert the model using the newer environment. Following your suggestion, I'll try converting it again with a lower TensorFlow version.

poonnatuch avatar Jan 16 '24 14:01 poonnatuch

Hey @poonnatuch, did it finally work with your tries?

RobinGRAPIN avatar Feb 20 '24 14:02 RobinGRAPIN