q yao

Results 318 comments of q yao

Hi, @cefengxu I have updated the code (all three repos). Batch input support has been added to some models (tested on Faster R-CNN, Double-Head R-CNN, Cascade R-CNN, RetinaNet, etc.)....

Hi, thanks for using this project. About the questions: 1. I am not sure; my guess is that it is caused by amirstan_plugin. Are you using the latest version? Try rebuilding it...

> because I compared the size before and after use, and the size after using fp16 is still larger than that in mmdetection.

That is expected; TRT is used to speed up...

Thanks for the report. I will have a test.

Can you update the convert tools (torch2trt_dynamic, amirstan_plugin, mmdetection-to-tensorrt) and try again? The latest version is 0.5.0. It includes many changes, including bug fixes.

Theoretically, TensorRT 7.2.2.3 and mmdetection 2.12 are supported.

Hi, mask support is an experimental feature, still under development. Better support, including int8 and grid_sampling, will be published in a future version.

TridentNet has not been supported yet.

Hi, batch inference works on most models. Just set `opt_shape_param`:

```
opt_shape_param = [
    [
        [1, 3, 320, 320],    # min shape
        [1, 3, 800, 1312],   # opt shape
        [4, 3, 1344, 1344],  # max shape
    ]
]
```

This should give you a model with batch support (max_batch_size=4). And...
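As a sketch of how such a min/opt/max shape triple can be sanity-checked before building an engine (plain Python, no TensorRT required; the helper name `check_opt_shape_param` is hypothetical, not part of the project's API):

```python
def check_opt_shape_param(opt_shape_param):
    """Verify each [min, opt, max] shape triple is element-wise non-decreasing.

    opt_shape_param follows the layout shown above: one entry per input,
    each entry holding three NCHW shapes (min, opt, max).
    """
    for shapes in opt_shape_param:
        if len(shapes) != 3:
            raise ValueError("each entry needs exactly [min, opt, max] shapes")
        min_s, opt_s, max_s = shapes
        if not (len(min_s) == len(opt_s) == len(max_s)):
            raise ValueError("rank mismatch between min/opt/max shapes")
        for lo, mid, hi in zip(min_s, opt_s, max_s):
            if not (lo <= mid <= hi):
                raise ValueError(f"shape dims not ordered: {lo} <= {mid} <= {hi}")
    return True

# example matching the comment above: batch 1..4, images up to 1344x1344
opt_shape_param = [
    [
        [1, 3, 320, 320],    # min shape
        [1, 3, 800, 1312],   # opt shape
        [4, 3, 1344, 1344],  # max shape
    ]
]
check_opt_shape_param(opt_shape_param)  # passes; the max batch dim is 4
```

The batch dimension of the max shape is what bounds the usable batch size at inference time.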

@heboyong @jinfagang I have created a QQ group: 1107959378. Join if you want to discuss or participate in this project.