                        Conversion Error for IsInf OP
We converted an ONNX model containing IsInf OPs using TPAT and it succeeded. We noticed that the IsInf OP is implemented as a tpat_IsInf plugin node followed by a Cast OP. When we convert the ONNX model to a TensorRT engine, the following error occurs:
onnx2trt.py:29: DeprecationWarning: Use set_memory_pool_limit instead.
  config.max_workspace_size = (1 << 20) * 3 * 1024
Loading ONNX file from path /home/tensorrt/model_testing-sim.onnx...
Beginning ONNX file parsing
[08/16/2022-10:10:18] [TRT] [W] onnx2trt_utils.cpp:363: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
raw shape of 0 is: (6, 3, 928, 1600)
Completed parsing of ONNX file
Building an engine from file /home/tensorrt/model_testing-sim.onnx; this may take a while...
onnx2trt.py:54: DeprecationWarning: Use build_serialized_network instead.
  engine = builder.build_engine(network, config)
[08/16/2022-10:11:00] [TRT] [E] 1: [castBuilder.cpp::addSupportedFormats::117] Error Code 1: Internal Error (Cast output type does not support bool.)
Completed creating Engine
Traceback (most recent call last):
  File "onnx2trt.py", line 57, in <module>
    f.write(engine.serialize())
AttributeError: 'NoneType' object has no attribute 'serialize'
How can I solve this issue?
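For reference, here is a minimal sketch of a conversion script matching the log above. The paths, the 3 GiB workspace size, and the plugin library name tpat_isinf.so are assumptions/placeholders; it uses the non-deprecated set_memory_pool_limit and build_serialized_network APIs flagged by the DeprecationWarnings, loads the TPAT plugin .so before parsing, and guards against a None build result instead of crashing on engine.serialize():

```python
import ctypes
import tensorrt as trt

ONNX_PATH = "/home/tensorrt/model_testing-sim.onnx"
PLUGIN_LIB = "./tpat_isinf.so"  # hypothetical path to the TPAT-generated plugin .so

logger = trt.Logger(trt.Logger.WARNING)

# Load the plugin library first so its creators register with TensorRT
ctypes.CDLL(PLUGIN_LIB)
trt.init_libnvinfer_plugins(logger, "")

builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parsing failed")

config = builder.create_builder_config()
# replaces the deprecated: config.max_workspace_size = (1 << 20) * 3 * 1024
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 3 << 30)  # 3 GiB

# replaces the deprecated: engine = builder.build_engine(network, config)
serialized_engine = builder.build_serialized_network(network, config)
if serialized_engine is None:
    # avoids the AttributeError on NoneType seen in the traceback above
    raise RuntimeError("Engine build failed; check the TRT error log")

with open("model_testing-sim.plan", "wb") as f:
    f.write(serialized_engine)
```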
Looks like an issue of TPAT? TRT doesn't support the IsInf operator now, so it should be implemented as a plugin.
Thanks for answering!
We communicated with the TPAT team. We used the same ONNX model file (containing only the IsInf OP) and the same plugin library. They converted it successfully with TensorRT 8.0.1, but it failed on our side; we are using JetPack 5.0.1 with TensorRT 8.4.0 on an NVIDIA AGX Orin. Could the TensorRT version be causing the problem?
Please refer to the TPAT issue.
Log from the start of the run:
onnx2trt.py:31: DeprecationWarning: Use set_memory_pool_limit instead.
Loading ONNX file from path /home/dms/Codes/xuhan/DETR3D/model-tpat.onnx...
Beginning ONNX file parsing
raw shape of 0 is:  (1, 64)
Completed parsing of ONNX file
Building an engine from file /home/dms/Codes/xuhan/DETR3D/model-tpat.onnx; this may take a while...
onnx2trt.py:54: DeprecationWarning: Use build_serialized_network instead.
[08/22/2022-17:20:15] [TRT] [E] 1: [castBuilder.cpp::addSupportedFormats::117] Error Code 1: Internal Error (Cast output type does not support bool.)
Completed creating Engine
Traceback (most recent call last):
  File "onnx2trt.py", line 57, in <module>
AttributeError: 'NoneType' object has no attribute 'serialize'
Can you share your ONNX model and the generated plugin code? It looks like this is why it failed; I would guess there is an unsupported Cast operation in your model:
[08/22/2022-17:20:15] [TRT] [E] 1: [castBuilder.cpp::addSupportedFormats::117] Error Code 1: Internal Error (Cast output type does not support bool.)
We had some problems uploading the files here, but the ONNX node info is shown below, and the plugin .so file was generated by TPAT. I am not sure whether there is a problem when the input dtype for the Cast OP is None.
Graph torch-jit-export (Opset: 10)
Inputs: [Variable (0): (shape=[1, 64], dtype=float32)]
Nodes:
IsInf_0 (tpat_IsInf_0)
        Inputs: [
                Variable (0): (shape=[1, 64], dtype=float32)
        ]
        Outputs: [
                Variable (cast_back_for_3:0): (shape=None, dtype=None)
        ]
cast_back_for_3 (Cast)
        Inputs: [
                Variable (cast_back_for_3:0): (shape=None, dtype=None)
        ]
        Outputs: [
                Variable (3): (shape=[1, 64], dtype=bool)
        ]
Attributes: {'to': 9}
Outputs: [Variable (3): (shape=[1, 64], dtype=bool)]
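For anyone trying to reproduce this inspection, a minimal sketch of how such a node listing can be dumped with ONNX GraphSurgeon (the model path is a placeholder, and the print format is only an approximation of the listing above):

```python
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("model-tpat.onnx"))

print(f"Graph {graph.name} (Opset: {graph.opset})")
print(f"Inputs: {graph.inputs}")
print("Nodes:")
for node in graph.nodes:
    print(f"{node.name} ({node.op})")
    print(f"\tInputs: {node.inputs}")
    print(f"\tOutputs: {node.outputs}")
    if node.attrs:
        print(f"Attributes: {dict(node.attrs)}")
print(f"Outputs: {graph.outputs}")
```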
We solved the issue by adding another Cast layer: first cast the output of tpat_IsInf to int32, then cast to bool. Is it because, in TensorRT, the Cast OP does not support an input with None dtype when the output is bool?
Graph after inserting Cast nodes:
Graph torch-jit-export (Opset: 10)
Inputs: [Variable (0): (shape=[1, 64], dtype=float32)]
Nodes:
IsInf_0 (tpat_IsInf_0)
        Inputs: [
                Variable (0): (shape=[1, 64], dtype=float32)
        ]
        Outputs: [
                Variable (cast_back_0_for_3:0): (shape=None, dtype=None)
        ]
cast_back_0_for_3 (Cast)
        Inputs: [
                Variable (cast_back_0_for_3:0): (shape=None, dtype=None)
        ]
        Outputs: [
                Variable (cast_back_0_for_3:1): (shape=[1, 64], dtype=<class 'numpy.int32'>)
        ]
Attributes: {'to': 6}
cast_back_1_for_3 (Cast)
        Inputs: [
                Variable (cast_back_0_for_3:1): (shape=[1, 64], dtype=<class 'numpy.int32'>)
        ]
        Outputs: [
                Variable (3): (shape=[1, 64], dtype=bool)
        ]
Attributes: {'to': 9}
Outputs: [Variable (3): (shape=[1, 64], dtype=bool)]
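A minimal sketch of how the extra Cast could be inserted with ONNX GraphSurgeon; the model paths are placeholders. It rewires every Cast-to-bool node ('to': 9) so it is fed by a new intermediate Cast-to-int32 node ('to': 6), matching the graph above:

```python
import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("model-tpat.onnx"))

# find every Cast node that converts the plugin output straight to bool ('to': 9)
bool_casts = [n for n in graph.nodes
              if n.op == "Cast" and n.attrs.get("to") == onnx.TensorProto.BOOL]

for node in bool_casts:
    # intermediate int32 tensor between the plugin output and the final bool Cast
    int32_var = gs.Variable(node.name + "_int32", dtype=np.int32,
                            shape=node.outputs[0].shape)
    cast_to_int32 = gs.Node(op="Cast", name=node.name + "_to_int32",
                            attrs={"to": onnx.TensorProto.INT32},  # 'to': 6
                            inputs=list(node.inputs), outputs=[int32_var])
    graph.nodes.append(cast_to_int32)
    node.inputs = [int32_var]  # the original Cast now only does int32 -> bool

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "model-tpat-fixed.onnx")
```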
Is it because, in TensorRT, the Cast OP does not support an input with None dtype when the output is bool?
I think so.
Closing since there has been no activity for more than 3 weeks; please reopen if you still have questions, thanks!