
Create onnx graph throws AttributeError: 'Variable' object has no attribute 'values'

SEHAIRIKamal opened this issue 3 years ago · 8 comments

Problem description

Hi all, I am trying to build a TensorRT engine from the TF2 Object Detection API SSD MobileNet v2 320x320 model. I followed https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api and successfully exported the TensorFlow model with float_image_tensor as the input type. However, when I try to create the ONNX graph using the create_onnx.py script, the process fails with the error 'Variable' object has no attribute 'values'. The full report is shown below. Any help is much appreciated; thanks in advance.

System information

numpy 1.22.3
Pillow 9.0.1
TensorRT 8.4.0.6
TensorFlow 2.8.0
object detection 0.1
pycuda 2021.1
onnx 1.11.0
onnxruntime 1.11.0
tf2onnx 1.10.0
onnx-graphsurgeon 0.3.10
Windows 10

Steps to reproduce

  1. Download the SSD MobileNet v2 320x320 from TensorFlow 2 model zoo
  2. Export the saved model with float_image_tensor as the input type
cd /path/to/models/research/object_detection
python exporter_main_v2.py \
    --input_type float_image_tensor \
    --trained_checkpoint_dir /path/to/ssd_mobilenet_v2_320x320_coco17_tpu-8/checkpoint \
    --pipeline_config_path /path/to/ssd_mobilenet_v2_320x320_coco17_tpu-8/pipeline.config \
    --output_directory /path/to/export
  3. Create the ONNX graph
python create_onnx.py \
    --pipeline_config /path/to/exported/pipeline.config \
    --saved_model /path/to/exported/saved_model \
    --onnx /path/to/save/model.onnx

Output report

python create_onnx.py --pipeline_config C:/Tensorflow/data/models/newModelSSDMobilenetv2_300/pipeline.config --saved_model C:/Tensorflow/data/models/newModelSSDMobilenetv2_300/saved_model --onnx C:/Tensorflow/data/models/newModelSSDMobilenetv2_300/model.onnx
C:\Tensorflow\venv\lib\site-packages\numpy\_distributor_init.py:30: UserWarning: loaded more than 1 DLL from .libs:
C:\Tensorflow\venv\lib\site-packages\numpy\.libs\libopenblas.EL2C6PLE4ZYW3ECEVIV3OXXGRN2NRFM2.gfortran-win_amd64.dll
C:\Tensorflow\venv\lib\site-packages\numpy\.libs\libopenblas.WCDJNK7YVMPZQ2ME2ZZHJJRJ3JIKNDB7.gfortran-win_amd64.dll
  warnings.warn("loaded more than 1 DLL from .libs:"
INFO:tf2onnx.tf_loader:Signatures found in model: [serving_default].
INFO:tf2onnx.tf_loader:Output names: ['detection_anchor_indices', 'detection_boxes', 'detection_classes', 'detection_multiclass_scores', 'detection_scores', 'num_detections', 'raw_detection_boxes', 'raw_detection_scores']
WARNING:tensorflow:From C:\Tensorflow\venv\lib\site-packages\tf2onnx\tf_loader.py:711: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.extract_sub_graph`
WARNING:tensorflow:From C:\Tensorflow\venv\lib\site-packages\tf2onnx\tf_loader.py:711: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.extract_sub_graph`
INFO:ModelHelper:Loaded saved model from C:\Tensorflow\data\models\newModelSSDMobilenetv2_300\saved_model
INFO:tf2onnx.tfonnx:Using tensorflow=2.8.0, onnx=1.11.0, tf2onnx=1.10.0/07e9e0
INFO:tf2onnx.tfonnx:Using opset <onnx, 11>
INFO:tf2onnx.tf_utils:Computed 4 values for constant folding
INFO:tf2onnx.tfonnx:folding node using tf type=Select, name=Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/Select_4
INFO:tf2onnx.tfonnx:folding node using tf type=Select, name=Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/Select_5
INFO:tf2onnx.tfonnx:folding node using tf type=Select, name=Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/Select_8
INFO:tf2onnx.tfonnx:folding node using tf type=Select, name=Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/Select_1
INFO:tf2onnx.tf_utils:Computed 0 values for constant folding
INFO:tf2onnx.tf_utils:Computed 0 values for constant folding
INFO:tf2onnx.tf_utils:Computed 0 values for constant folding
INFO:tf2onnx.tf_utils:Computed 0 values for constant folding
INFO:tf2onnx.optimizer:Optimizing ONNX model
INFO:tf2onnx.optimizer:After optimization: BatchNormalization -53 (60->7), Cast -481 (2037->1556), Const -451 (3381->2930), Gather +7 (488->495), Identity -199 (199->0), Less -2 (99->97), Mul -2 (504->502), Placeholder -9 (18->9), Reshape -17 (405->388), Shape -8 (216->208), Slice -7 (427->420), Squeeze -22 (342->320), Transpose -272 (293->21), Unsqueeze -166 (478->312)
INFO:ModelHelper:TF2ONNX graph created successfully
INFO:ModelHelper:Model is ssd_mobilenet_v2_keras
INFO:ModelHelper:Height is 300
INFO:ModelHelper:Width is 300
INFO:ModelHelper:First NMS score threshold is 9.99999993922529e-09
INFO:ModelHelper:First NMS iou threshold is 0.6000000238418579
INFO:ModelHelper:First NMS max proposals is 100
[W] Inference failed. You may want to try enabling partitioning to see better results. Note: Error was:
This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
[W] Inference failed. You may want to try enabling partitioning to see better results. Note: Error was:
This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
INFO:ModelHelper:ONNX graph input shape: [1, 300, 300, 3] [NCHW format set]
INFO:ModelHelper:Found Conv node 'StatefulPartitionedCall/ssd_mobile_net_v2_keras_feature_extractor/model/Conv1/Conv2D' as stem entry
[W] Inference failed. You may want to try enabling partitioning to see better results. Note: Error was:
This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
[W] Inference failed. You may want to try enabling partitioning to see better results. Note: Error was:
This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
[W] Inference failed. You may want to try enabling partitioning to see better results. Note: Error was:
This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
[W] Inference failed. You may want to try enabling partitioning to see better results. Note: Error was:
This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
[W] Inference failed. You may want to try enabling partitioning to see better results. Note: Error was:
This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
[W] Inference failed. You may want to try enabling partitioning to see better results. Note: Error was:
This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
INFO:ModelHelper:Found Concat node 'StatefulPartitionedCall/concat_1' as the tip of BoxPredictor/ConvolutionalClassHead_
INFO:ModelHelper:Found Squeeze node 'StatefulPartitionedCall/Squeeze' as the tip of BoxPredictor/ConvolutionalBoxHead_
Traceback (most recent call last):
  File "C:\Tensorflow\tensorflow_object_detection_api\create_onnx.py", line 673, in <module>
    main(args)
  File "C:\Tensorflow\tensorflow_object_detection_api\create_onnx.py", line 649, in main
    effdet_gs.process_graph(args.first_nms_threshold, args.second_nms_threshold)
  File "C:\Tensorflow\tensorflow_object_detection_api\create_onnx.py", line 622, in process_graph
    self.graph.outputs = first_nms(-1, True, first_nms_threshold)
  File "C:\Tensorflow\tensorflow_object_detection_api\create_onnx.py", line 486, in first_nms
    anchors_tensor = self.extract_anchors_tensor(box_net_split)
  File "C:\Tensorflow\tensorflow_object_detection_api\create_onnx.py", line 312, in extract_anchors_tensor
    anchors_y = get_anchor(0, "Add")
  File "C:\Tensorflow\tensorflow_object_detection_api\create_onnx.py", line 301, in get_anchor
    if (node.inputs[1].values).size == 1:
AttributeError: 'Variable' object has no attribute 'values'
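(For reference: in onnx-graphsurgeon, only gs.Constant tensors carry a .values array; gs.Variable tensors do not, which is exactly what the last frame of the traceback trips over. Below is a minimal sketch of a defensive check, with a hypothetical helper name; the actual resolution in this thread was to match the library versions from the sample's requirements.txt.)

    import onnx_graphsurgeon as gs

    def constant_values(node, index):
        # Only gs.Constant inputs expose a .values ndarray; a gs.Variable
        # (a dynamic tensor) raises the AttributeError seen above.
        tensor = node.inputs[index]
        if isinstance(tensor, gs.Constant):
            return tensor.values
        return None  # caller decides how to handle a non-constant input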

SEHAIRIKamal · Jun 24 '22

@rajeevsrao ^ ^

zerollzeng · Jun 25 '22

@SEHAIRIKamal could you please check if using TensorFlow 2.5 (as prescribed in the README) fixes the issue?

cc @shuyuelan

rajeevsrao · Jun 25 '22

I was testing this just a couple of days ago and everything was working for me. Looking at the library versions you've posted, I think they are most likely the culprit here. When tf2onnx versions change, the TensorFlow -> ONNX conversion can come out differently; in particular, node naming and numbering can shift, which breaks the converter because it relies heavily on finding specific nodes by name. @SEHAIRIKamal, can you try the same thing, but instead of your current libraries, install the versions listed in requirements.txt? Just run pip install -r requirements.txt. Please reach out if it still doesn't work.
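As a quick sanity check before re-running create_onnx.py, a small script like the one below (package names are my assumption, based on the versions listed above) prints the installed versions so they can be compared against the sample's requirements.txt:

    import importlib.metadata as md  # Python 3.8+

    for pkg in ("tensorflow", "tf2onnx", "onnx", "onnxruntime",
                "onnx-graphsurgeon", "numpy", "Pillow"):
        try:
            print(pkg, md.version(pkg))
        except md.PackageNotFoundError:
            print(pkg, "not installed")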

azhurkevich · Jun 25 '22

@azhurkevich @zerollzeng @rajeevsrao Thank you a lot for your help. I will install the same library versions and get back to you.

SEHAIRIKamal · Jun 25 '22

Hi all, I have installed the same library versions on Windows, and it worked correctly. To build the TensorRT engine on a Jetson Nano, you have to upgrade your JetPack to 4.6.1 or 4.6.2, since they come with TensorRT 8.2.1. However, re-exporting the TensorFlow model or creating the ONNX model cannot be done on the Jetson Nano, because TensorFlow 2.5 cannot be installed on JetPack 4.6.1 and 4.6.2; with TensorFlow 2.7 it throws the same error mentioned in the title. Will NVIDIA provide TensorFlow 2.5 for Python 3.6 on Linux ARM for these JetPacks? https://developer.download.nvidia.com/compute/redist/jp/v461/tensorflow/ Also, is there inference code that can be used with a web camera or the RPi camera? The infer code uses the PIL library. Thanks in advance.

SEHAIRIKamal · Jul 12 '22

@SEHAIRIKamal So you definitely should not try to convert to ONNX on the Jetson; that will be a source of a lot of headaches related to library compatibility on ARM. We highly recommend converting to ONNX on your computer, then taking the resulting ONNX and using it on the Jetson device to build the TRT engine. ONNX models are portable; TRT engines are not (between different architectures).

TensorRT above 8.0.1 should work, so any JetPack with a TRT version newer than 8.0.1 should be OK.

"Is there an inference code that can be used with Web camera or the RPi camera the infer code uses PIL library? ", not my area of expertise, maybe filing a separate bug to ask might be a good idea.

azhurkevich · Jul 12 '22

@SEHAIRIKamal you'll probably have to use DeepStream for real-time video inference. Not sure how PIL is supposed to play with it, though.

azhurkevich · Jul 12 '22

Hi @zerollzeng, thank you for your comments; this is what I have done. I generated the ONNX model on my PC, then used that ONNX model to generate the TensorRT engine on the Jetson Nano, and it works correctly. For video testing, I will try to convert the existing code to use OpenCV instead of PIL so it can read images from the camera. I will give DeepStream a try as well. Thanks again.
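For the camera path, a minimal OpenCV sketch (the camera index, the 300x300 size from the log above, and the float32 cast are assumptions that depend on how the engine was built) could look like this:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)                  # camera index is an assumption
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # The exported graph expects [1, 300, 300, 3]; OpenCV delivers BGR frames.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        resized = cv2.resize(rgb, (300, 300))
        batch = np.expand_dims(resized, axis=0).astype(np.float32)
        # ... run TensorRT inference on `batch` here ...
        cv2.imshow("preview", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()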

SEHAIRIKamal · Jul 14 '22

Closing since there has been no activity for a long time. Thanks all!

ttyio · Nov 23 '23