mmdetection-to-tensorrt
Multiple batch for only one inference
How can I process more than one image (for example 2, i.e. batch_size == 2) in a single inference when using mmdetection-to-tensorrt?
Sad to say, this repo does not support batched input for now. Batch support is at the top of my ToDo list and will be added soon.
Great work!!
Hi @cefengxu, I have updated the code (all three repos). Batch input support has been added for some models (tested on Faster R-CNN, Double Head R-CNN, Cascade R-CNN, RetinaNet, etc.). Just set the opt_shape_param as follows:
opt_shape_param = [
    [
        [1, 3, 320, 320],     # min shape
        [2, 3, 800, 1344],    # optimize shape
        [4, 3, 1344, 1344],   # max shape
    ]
]
As long as opt_shape_param[0][2][0] == 4, it should give you a batch size of up to 4. Not all models support batch input yet; it takes time. Still working on it.
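For context, here is a minimal sketch of how that opt_shape_param would be passed to the converter, assuming the mmdet2trt entry point and keyword arguments shown in this repo's README (the config/checkpoint paths below are placeholders, and the exact signature may differ across versions):

import torch
from mmdet2trt import mmdet2trt

# Hypothetical paths; substitute your own mmdetection config and checkpoint.
cfg_path = "configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py"
weight_path = "checkpoints/faster_rcnn_r50_fpn_1x_coco.pth"
save_path = "faster_rcnn_trt.pth"

# The first dimension of the max shape (4 here) caps the batch size the
# TensorRT engine will accept at inference time.
opt_shape_param = [
    [
        [1, 3, 320, 320],     # min shape
        [2, 3, 800, 1344],    # optimize shape
        [4, 3, 1344, 1344],   # max shape
    ]
]

trt_model = mmdet2trt(cfg_path, weight_path,
                      opt_shape_param=opt_shape_param,
                      fp16_mode=True)
torch.save(trt_model.state_dict(), save_path)

Once the engine is built this way, batched tensors with up to 4 images (padded/resized to fit within the max shape) should be accepted, on the models where batch support has landed.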
@sunpeng981712364 Thank you. So glad to hear that.
Cool... I will try it ASAP~!