[Bug] Python SDK Batch Inference not working?
Checklist
- [ ] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. I have read the FAQ documentation but cannot get the expected help.
- [ ] 3. The bug has not been fixed in the latest version.
Describe the bug
An MMDetection model was compiled with a static batch size of 3. When using the Python `Detector` API to run inference on a batch of 3 images, it fails with `RuntimeError: continuous uint8 HWC array expected`.
Reproduction
Convert an MMDet model to TensorRT using MMDeploy with a static batch size > 1. Try to run batch inference:
```python
from mmdeploy_runtime import Detector
import cv2

img = cv2.imread('image.jpg')

# create a detector
detector = Detector(model_path='model_path', device_name='cuda', device_id=0)

# run inference on a batch of 3 images
detector([img, img, img])
```
Error: `RuntimeError: continuous uint8 HWC array expected`
Environment
MMDeploy 1.3 (latest), MMDetection 3.3, PyTorch 2.3, Python 3.8, Linux
Error traceback
`RuntimeError: continuous uint8 HWC array expected`
Perhaps I am doing something wrong? I also tried stacking the images into a single NumPy array, which did not help.
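The stacking attempt looked roughly like this (the explicit contiguous uint8 cast is extra, just to rule out the layout issue the error message seems to hint at):
```python
import numpy as np

# Stack three HWC images into one (3, H, W, C) array and force the
# contiguous uint8 layout the error message appears to ask for.
batch = np.ascontiguousarray(np.stack([img, img, img]), dtype=np.uint8)
detector(batch)  # still raises the same RuntimeError
```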
Perhaps related to https://github.com/open-mmlab/mmdeploy/issues/2808
I found that the `Detector` class has a `batch` method, so I gave that a try:
```python
detector.batch([img, img, img])
```
But it seems to pass the images to the model one at a time, so the input has batch dimension 1 and fails the static TensorRT profile check (min = max = 3):
```
[2024-08-14 16:07:31.873] [mmdeploy] [error] [trt_net.cpp:28] TRTNet: 3: [executionContext.cpp::validateInputBindings::2083] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::validateInputBindings::2083, condition: profileMinDims.d[i] <= dimensions.d[i]. Supplied binding dimension [1,3,800,1344] for bindings[0] exceed min ~ max range at index 0, maximum dimension in profile is 3, minimum dimension in profile is 3, but supplied dimension is 1.
```
The `inference_model` API from `mmdeploy.apis` does seem to work if you pass a list of images as `img`.
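For reference, the working call looks roughly like this (config and engine paths below are placeholders):
```python
from mmdeploy.apis import inference_model

# Passing a list of images as `img` runs them through the
# static-batch engine as a single batch.
result = inference_model(
    model_cfg='path/to/model_config.py',
    deploy_cfg='path/to/deploy_config.py',
    backend_files=['path/to/end2end.engine'],
    img=[img, img, img],
    device='cuda:0')
```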
`inference_model` reloads the model on every call, which is inefficient. Is there a way to get batch inference working with the `Detector` without reloading the model each time `inference_model` is called? @matthost
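For example, would something like this work? (Untested sketch based on how `inference_model` is implemented internally in MMDeploy 1.x; paths are placeholders.)
```python
import torch
from mmdeploy.apis.utils import build_task_processor
from mmdeploy.utils import get_input_shape, load_config

# Build the backend model once, then reuse it across calls.
deploy_cfg, model_cfg = load_config('path/to/deploy_config.py',
                                    'path/to/model_config.py')
task_processor = build_task_processor(model_cfg, deploy_cfg, device='cuda:0')
model = task_processor.build_backend_model(['path/to/end2end.engine'])
input_shape = get_input_shape(deploy_cfg)

def infer(imgs):
    # Preprocess and run inference without rebuilding the engine.
    model_inputs, _ = task_processor.create_input(imgs, input_shape)
    with torch.no_grad():
        return model.test_step(model_inputs)

results = infer([img, img, img])
```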
Hello, I have the same issue when using `PoseTracker` from MMDeploy. Did you find a way to solve this?