
how to do batch inference?

Open machine52vision opened this issue 2 years ago • 4 comments

For example:

mmdeploy_mat_t mat{img.data, img.rows, img.cols, 3, MMDEPLOY_PIXEL_FORMAT_BGR, MMDEPLOY_DATA_TYPE_UINT8};
mmdeploy_detection_t* bboxes{};
int* res_count{};
status = mmdeploy_detector_apply(detector, &mat, 1, &bboxes, &res_count);

How can I do batch inference with mmdeploy_detector_apply?

machine52vision avatar Aug 18 '22 05:08 machine52vision

How can I solve this problem?

machine52vision avatar Aug 18 '22 05:08 machine52vision

Hi, can you check if #839 solves your problem?

lzhangzz avatar Aug 18 '22 06:08 lzhangzz

> Hi, can you check if #839 solves your problem?

That is a Python script. How can I do batch inference in C++ with TensorRT?

machine52vision avatar Aug 18 '22 07:08 machine52vision

Sorry for the late response. Please refer to PR #986. Also, after getting the SDK model, 'pipeline.json' should be updated manually according to the suggestion in #839, which I quote below:

Batch inference in the SDK is experimental and must be turned on explicitly in the configuration file. In the model's pipeline.json, insert the field "is_batched": true into the config of the task whose module is Net:

{
    "name": "yolox",
    "type": "Task",
    "module": "Net",
    "is_batched": true,   // <--
    "input": ["prep_output"],
    "output": ["infer_output"],
    "input_map": {"img": "input"}
}
Also be aware that, after preprocessing, the images must all have the same size to form a batch.
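Putting the above together, here is a minimal C++ sketch of a batched call with the mmdeploy C API. It assumes the detector has already been created, the model's pipeline.json contains "is_batched": true as described above, and OpenCV is used for image loading; the function name detect_batch is just illustrative.

```cpp
// Sketch: batch inference with the mmdeploy detector C API.
// Assumes `detector` was created beforehand (e.g. with
// mmdeploy_detector_create_by_path) and that pipeline.json has
// "is_batched": true, as explained above.
#include <cstddef>
#include <vector>

#include <opencv2/opencv.hpp>
#include "mmdeploy/detector.h"

void detect_batch(mmdeploy_detector_t detector, const std::vector<cv::Mat>& imgs) {
  // Build one mmdeploy_mat_t per image. After preprocessing, all images
  // must end up the same size, or they cannot be stacked into a batch.
  std::vector<mmdeploy_mat_t> mats;
  for (const cv::Mat& img : imgs) {
    mats.push_back({img.data, img.rows, img.cols, 3,
                    MMDEPLOY_PIXEL_FORMAT_BGR, MMDEPLOY_DATA_TYPE_UINT8});
  }

  mmdeploy_detection_t* bboxes{};
  int* res_count{};  // res_count[i] = number of detections for image i
  int status = mmdeploy_detector_apply(detector, mats.data(),
                                       static_cast<int>(mats.size()),
                                       &bboxes, &res_count);
  if (status != MMDEPLOY_SUCCESS) {
    return;
  }

  // Detections are packed back to back: image i's results begin where
  // image i-1's results end.
  const mmdeploy_detection_t* cur = bboxes;
  for (std::size_t i = 0; i < mats.size(); ++i) {
    for (int j = 0; j < res_count[i]; ++j, ++cur) {
      // Use cur->bbox, cur->score, cur->label_id here.
    }
  }

  mmdeploy_detector_release_result(bboxes, res_count,
                                   static_cast<int>(mats.size()));
}
```

The only difference from the single-image call quoted in the question is that an array of mmdeploy_mat_t and a count greater than 1 are passed, and res_count then holds one entry per image.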

lvhan028 avatar Sep 02 '22 09:09 lvhan028