yolort
Support dynamic shape mechanism for TensorRT
🐛 Describe the bug
error:
Error Code 4: Internal Error (Network must have at least one output)
I can visualize the yolov5 model with Netron, but going through the whole ONNX graph I cannot find any detection outputs marked as outputs, and TensorRT fails to parse it.
However, onnx2trt seems able to parse it.
Versions
please help
Hi @luohao123 ,
Thanks for creating this ticket! Which TensorRT version are you using? We only support TensorRT 8.0+ at the moment. Could you also supply a more detailed example to reproduce this error so that we can debug it?
@zhiqwang I think I forgot to initialize the plugins when parsing with TensorRT, since the model contains BatchedNMSPlugin.
However, I actually want to set the model input width and height at conversion time rather than at export time. So I set enable_dynamic to True, but got an error like this:
[TensorRT] INFO: Searching for plugin: BatchedNMS_TRT, plugin_version: 1, plugin_namespace:
[TensorRT] WARNING: builtin_op_importers.cpp:4779: Attribute scoreBits not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[TensorRT] INFO: Successfully created plugin: BatchedNMS_TRT
[TensorRT] ERROR: batched_nms: PluginV2Layer must be V2DynamicExt when there are runtime input dimensions.
How can I change the model input width and height after converting to ONNX?
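For reference, the first error above ("Searching for plugin: BatchedNMS_TRT") is typically resolved by registering TensorRT's built-in plugin creators before creating the ONNX parser. A minimal sketch using the standard TensorRT Python API (not a yolort helper — the path and function name here are illustrative, and the import is guarded so the sketch can be read without a GPU machine):

```python
# Sketch: register TensorRT's built-in plugins (including BatchedNMS_TRT)
# before parsing an ONNX model, so plugin nodes resolve during parsing.
# The import is guarded because TensorRT is only available on machines
# with the library installed.
try:
    import tensorrt as trt
except ImportError:
    trt = None


def parse_onnx(onnx_path):
    """Parse an ONNX file into a TensorRT network, with plugins registered."""
    logger = trt.Logger(trt.Logger.INFO)
    # Register all built-in plugin creators under the "" namespace so that
    # nodes such as BatchedNMS_TRT can be found by the ONNX parser.
    trt.init_libnvinfer_plugins(logger, "")
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            # Surface parser diagnostics before failing.
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("Failed to parse ONNX model")
    return builder, network
```

The key call is `trt.init_libnvinfer_plugins(logger, "")`; without it the parser cannot look up the BatchedNMS_TRT creator.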
Hi @luohao123 ,
I actually want to set the model input width and height at conversion time rather than at export time.
Got it. Currently we don't support the dynamic shape mechanism for TensorRT in yolort. We will implement this feature as soon as possible, and a PR for this is welcome here.
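For anyone following along, the general TensorRT mechanism for dynamic shapes is: export the ONNX model with dynamic axes on the input, then attach an optimization profile with min/opt/max shapes when building the engine. A hedged sketch of that second step — the input name `"images"` and the shape range are illustrative assumptions, not yolort's actual settings (and note the error above suggests the BatchedNMS plugin node itself must also support dynamic dimensions, i.e. be an `IPluginV2DynamicExt` variant):

```python
# Sketch of attaching a TensorRT optimization profile so that the input
# height/width can vary at runtime. Assumes the network input was exported
# with dynamic (-1) H/W dims. Import is guarded: TensorRT requires a GPU
# machine and is not always installed.
try:
    import tensorrt as trt
except ImportError:
    trt = None


def add_dynamic_profile(builder, config, input_name="images"):
    """Attach a profile allowing runtime H/W between 320 and 1280 (NCHW)."""
    profile = builder.create_optimization_profile()
    profile.set_shape(
        input_name,
        (1, 3, 320, 320),    # minimum shape the engine must accept
        (1, 3, 640, 640),    # shape TensorRT optimizes kernels for
        (1, 3, 1280, 1280),  # maximum shape the engine must accept
    )
    config.add_optimization_profile(profile)
    return config
```

At inference time the chosen input shape is then set on the execution context before running, which is what makes the "runtime input dimensions" in the error message legal once every layer (including plugins) supports them.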
Hi @zhiqwang. Is there any update on the dynamic shape mechanism for TensorRT?