mmdeploy
how to do batch inference?
For example, single-image inference looks like this:

```cpp
mmdeploy_mat_t mat{img.data, img.rows, img.cols, 3, MMDEPLOY_PIXEL_FORMAT_BGR, MMDEPLOY_DATA_TYPE_UINT8};
mmdeploy_detection_t* bboxes{};
int* res_count{};
status = mmdeploy_detector_apply(detector, &mat, 1, &bboxes, &res_count);
```

How can I do batch inference with `mmdeploy_detector_apply`? How can I solve this problem?
Hi, can you check if #839 solves your problem?
> Hi, can you check if #839 solves your problem?

That is a Python script. How can I do batch inference with C++ and TensorRT?
Sorry for the late response. Please refer to PR #986. Also, after getting the SDK model, `pipeline.json` should be manually updated according to the suggestion in #839, which I quote as follows:
Batch inference in the SDK is experimental and must be turned on explicitly in the configuration file. In the model's `pipeline.json`, insert the field `"is_batched": true` into the config of the task whose module is `Net`:
```json
{
  "name": "yolox",
  "type": "Task",
  "module": "Net",
  "is_batched": true,  // <--
  "input": ["prep_output"],
  "output": ["infer_output"],
  "input_map": {"img": "input"}
}
```
Also be aware that after preprocessing, images must be of the same size to form a batch.