
Converting a Detectron2 model to TensorRT

Open · DavidNguyen95 opened this issue 3 years ago · 6 comments

I followed your sample here that supports converting a Detectron2 model to TensorRT: https://github.com/NVIDIA/TensorRT/tree/main/samples/python/detectron2

I shared my code on Colab here: https://colab.research.google.com/drive/1L7OcEkyetcqZeDB-TmiH0M85TOQ7ge6t#scrollTo=1zwdEdx8Em0K

I am stuck at the step that converts the Caffe2 export to the ONNX graph.

Here is the error:

```
Traceback (most recent call last):
  File "TensorRT/samples/python/detectron2/create_onnx.py", line 659, in <module>
    main(args)
  File "TensorRT/samples/python/detectron2/create_onnx.py", line 640, in main
    det2_gs.process_graph(anchors, args.first_nms_threshold, args.second_nms_threshold)
  File "TensorRT/samples/python/detectron2/create_onnx.py", line 628, in process_graph
    box_head_outputs, mask_head_output = roi_heads(rpn_outputs, p2, p3, p4, p5, second_nms_threshold)
  File "TensorRT/samples/python/detectron2/create_onnx.py", line 545, in roi_heads
    first_box_head_gemm.inputs[0] = box_pooler_reshape[0]
AttributeError: 'NoneType' object has no attribute 'inputs'
```

Do you have any suggestions?

One other thing: I want to convert to TRT and then deploy on a Jetson Xavier NX. Do I need to build the TensorRT engine directly on the Jetson Xavier NX? In other words, if I build the .trt file on Colab or an RTX card, will that .trt work on the Jetson?

DavidNguyen95 · Jul 11 '22 16:07

> Do you have any suggestions?

@kevinch-nv ^ ^

> One other thing: I want to convert to TRT and then deploy on a Jetson Xavier NX. Do I need to build the TensorRT engine directly on the Jetson Xavier NX? In other words, if I build the .trt file on Colab or an RTX card, will that .trt work on the Jetson?

You have to build the engine on the Jetson.
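
TensorRT engines are specific to the GPU they were built on and to the TensorRT version, so a .trt serialized on Colab or an RTX card will not deserialize on the Xavier NX. As a minimal sketch of the on-device build (the sample's own build_engine.py does this more completely; "converted.onnx" and "model.trt" are placeholder names):

```python
import tensorrt as trt

# Sketch: build an engine from ONNX on the Jetson itself (TensorRT 8.x API).
logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, "")  # registers plugins such as EfficientNMS_TRT

builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("converted.onnx", "rb") as f:  # placeholder: output of create_onnx.py
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GiB workspace; tune for the Xavier NX
serialized = builder.build_serialized_network(network, config)

with open("model.trt", "wb") as f:
    f.write(serialized)
```

Running `trtexec --onnx=converted.onnx --saveEngine=model.trt` on the Jetson achieves the same thing from the command line.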

zerollzeng · Jul 12 '22 11:07

Hello, do you have any update? I am confused by your sample: you convert the ONNX to an ONNX graph using onnx_graphsurgeon and then convert that to TRT. Why don't we convert the ONNX to TensorRT directly? What exactly does onnx_graphsurgeon do in this case?

DavidNguyen95 · Jul 14 '22 12:07

@DavidNguyen95 Because the ONNX we get out of Caffe2 is filled with Caffe2 operations which are not compatible with TRT. You can visualize the graph with Netron and see for yourself. As a result, we need to perform graph surgery on the graph to get rid of the Caffe2 ops and replace them with TRT-compatible ops.
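
To make what the surgery does concrete, here is a minimal sketch (not the sample's actual code; "exported.onnx" is a placeholder) of loading the export with onnx_graphsurgeon, listing its ops, and retargeting one of them; create_onnx.py performs this kind of rewrite systematically:

```python
from collections import Counter

import onnx
import onnx_graphsurgeon as gs

# Load the Caffe2-flavoured ONNX export and count its op types; ops like
# GenerateProposals or ResizeNearest have no native TensorRT importer.
graph = gs.import_onnx(onnx.load("exported.onnx"))
print(Counter(node.op for node in graph.nodes))

# Schematic rewrite: retarget a Caffe2 op to the matching TRT plugin so the
# ONNX parser can map it via its plugin fallback importer.
for node in graph.nodes:
    if node.op == "ResizeNearest":
        node.op = "ResizeNearest_TRT"
        node.attrs["plugin_version"] = "1"
        node.attrs["plugin_namespace"] = ""

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "converted.onnx")
```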

I cannot open your Colab link; it has probably expired or something.

With regards to Jetson: I have not tested this sample on Jetson, but it will probably work. You need to create the ONNX on your machine, take that final ONNX, copy it to the Jetson, and build the TRT engine on the Jetson device.

The error you are facing looks like an incorrect node was selected during graph surgery. Most likely you are not using exactly the same version of the public Detectron2 Mask R-CNN R50-FPN 3x; you either selected a different model or modified it. As a result, node selection got mixed up and the conversion failed. Please share your ONNX so that we can take a look at it. Thanks.
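
For anyone hitting the same AttributeError: create_onnx.py looks nodes up by op type and exact tensor names and gets None back when nothing matches, which is what then crashes at line 545. A small sketch (placeholder filename) for comparing your export's node names against what the script expects:

```python
import onnx
import onnx_graphsurgeon as gs

# Dump Gemm and Reshape nodes with their tensor names; compare these against
# the names create_onnx.py searches for when wiring up the box head.
graph = gs.import_onnx(onnx.load("exported.onnx"))  # placeholder filename
for node in graph.nodes:
    if node.op in ("Gemm", "Reshape"):
        print(node.op,
              [t.name for t in node.inputs],
              "->",
              [t.name for t in node.outputs])
```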

azhurkevich · Jul 18 '22 17:07

@azhurkevich First, thank you.

  1. About Colab: the notebook is still public; please copy the link and paste it into a new tab in your browser. If you click the link from GitHub directly, it sometimes does not work. https://colab.research.google.com/drive/1L7OcEkyetcqZeDB-TmiH0M85TOQ7ge6t?usp=sharing

  2. Converting the ONNX graph: my model is Mask R-CNN R50-FPN 3x, but it does not work. I then tried to follow your work using the published Detectron2 Mask R-CNN R50-FPN 3x, but I hit the same error.
    Anyway, I tried to debug the published version: I printed self.graph.nodes, then modified some node names in find_node_by_op_input_output_name to get past the ONNX graph conversion step. But I am not sure I modified them correctly.

  3. TRT engine build: I used the ONNX graph from the step above, on a Jetson Xavier NX with JetPack 4.4 DP and TensorRT 7. I used the DeepStream 5 container (docker pull nvcr.io/nvidia/deepstream-l4t:5.0.1-20.09-samples) to convert the ONNX to TRT, but it fails with this error:

```
[TensorRT] WARNING: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[TensorRT] INFO: ModelImporter.cpp:135: No importer registered for op: ResizeNearest_TRT. Attempting to import as plugin.
[TensorRT] INFO: builtin_op_importers.cpp:3659: Searching for plugin: ResizeNearest_TRT, plugin_version: 1, plugin_namespace:
[TensorRT] INFO: builtin_op_importers.cpp:3676: Successfully created plugin: ResizeNearest_TRT
[TensorRT] INFO: ModelImporter.cpp:135: No importer registered for op: ResizeNearest_TRT. Attempting to import as plugin.
[TensorRT] INFO: builtin_op_importers.cpp:3659: Searching for plugin: ResizeNearest_TRT, plugin_version: 1, plugin_namespace:
[TensorRT] INFO: builtin_op_importers.cpp:3676: Successfully created plugin: ResizeNearest_TRT
[TensorRT] INFO: ModelImporter.cpp:135: No importer registered for op: ResizeNearest_TRT. Attempting to import as plugin.
[TensorRT] INFO: builtin_op_importers.cpp:3659: Searching for plugin: ResizeNearest_TRT, plugin_version: 1, plugin_namespace:
[TensorRT] INFO: builtin_op_importers.cpp:3676: Successfully created plugin: ResizeNearest_TRT
[TensorRT] INFO: ModelImporter.cpp:135: No importer registered for op: EfficientNMS_TRT. Attempting to import as plugin.
[TensorRT] INFO: builtin_op_importers.cpp:3659: Searching for plugin: EfficientNMS_TRT, plugin_version: 1, plugin_namespace:
[TensorRT] ERROR: INVALID_ARGUMENT: getPluginCreator could not find plugin EfficientNMS_TRT version 1
ERROR:EngineBuilder:Failed to load ONNX file: /opt/nvidia/deepstream/deepstream-5.0/ds/convertedfp32.onnx
ERROR:EngineBuilder:In node -1 (importFallbackPluginImporter): UNSUPPORTED_NODE: Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
```

I hope to receive your feedback. Thank you.

DavidNguyen95 · Jul 18 '22 19:07

@DavidNguyen95 You are not following the requirements. You should get the latest TensorRT, 8.4.1. I don't think we have public containers with TensorRT 8.4.1, so you'll have to install it manually in a Dockerfile. Please read the requirements and the README carefully; 90% of the issues come from people not reading the instructions carefully.
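
A quick way to confirm this is to list the plugin creators your installed TensorRT registers; EfficientNMS_TRT ships with TensorRT 8.x, which is why the TensorRT 7 DeepStream container above cannot find it. A minimal sketch:

```python
import tensorrt as trt

# List the plugins registered by this TensorRT installation. On TensorRT 7
# (as in the DeepStream 5 container), EfficientNMS_TRT will be missing.
logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, "")
names = sorted(c.name for c in trt.get_plugin_registry().plugin_creator_list)
print("\n".join(names))
print("EfficientNMS_TRT available:", "EfficientNMS_TRT" in names)
```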

Also, I do not recommend tinkering with the nodes unless you are 100% sure of what you are doing; otherwise you will connect the wrong parts of the graph and it will not work. The public Detectron2 Mask R-CNN R50-FPN 3x should work as is; I tested it last week.

azhurkevich · Jul 18 '22 19:07

@DavidNguyen95 On Colab, your dimensions are wrong: [1, 3, 768, 1344] (NCHW format). You did not use a 1344x1344 image when exporting the model. Please read the instructions.
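
The tracing image fixes the spatial dimensions baked into the exported graph, so checking the ONNX input shape tells you whether the export was done correctly. A minimal sketch (placeholder filename), which should print [1, 3, 1344, 1344] for this sample:

```python
import onnx

# Print the input shape baked into the export at tracing time.
model = onnx.load("exported.onnx")  # placeholder filename
for inp in model.graph.input:
    dims = [d.dim_value or d.dim_param for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)
```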

azhurkevich · Jul 18 '22 19:07

Closing since there has been no activity for more than 3 weeks, thanks.

ttyio · Feb 08 '23 01:02