Error when exporting YOLOv8 to TRT with a non-square input shape
Prerequisite
- [X] I have searched the existing and past issues but cannot get the expected help.
- [X] I have read the FAQ documentation but cannot get the expected help.
- [X] The bug has not been fixed in the latest version.
🐞 Describe the bug
It throws an error when using dynamic-shape TRT, or static-shape TRT with a non-square input; see the configs below for details.

```
ERROR - /root/workspace/mmdeploy/mmdeploy/apis/core/pipeline_manager.py - pop_mp_output - 80 - `mmdeploy.apis.utils.utils.to_backend` with Call id: 1 failed. exit.
```
Deploy Config for Dynamic TRT
```python
_base_ = ['mmyolo::deploy/base_dynamic.py']
backend_config = dict(
    type='tensorrt',
    common_config=dict(fp16_mode=True, max_workspace_size=1 << 40),
    model_inputs=[
        dict(
            input_shapes=dict(
                input=dict(
                    min_shape=[1, 3, 640, 640],
                    opt_shape=[1, 3, 640, 640],
                    max_shape=[32, 3, 640, 640])))
    ])
use_efficientnms = False  # whether to replace TRTBatchedNMS plugin with EfficientNMS plugin  # noqa E501
```
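As a side note, TensorRT requires `min_shape <= opt_shape <= max_shape` element-wise for every input in an optimization profile; the dynamic config above satisfies this only in the batch dimension. A minimal sketch of that sanity check (the function name is hypothetical, not part of mmdeploy):

```python
# Hypothetical sanity check for a TRT optimization profile: each dimension
# must satisfy min <= opt <= max, and all three shapes must have equal rank.
def profile_is_valid(min_shape, opt_shape, max_shape):
    """Return True if the three shapes form a valid TRT optimization profile."""
    if not (len(min_shape) == len(opt_shape) == len(max_shape)):
        return False
    return all(lo <= mid <= hi
               for lo, mid, hi in zip(min_shape, opt_shape, max_shape))

# The dynamic profile above: only the batch dimension varies (1 -> 32).
print(profile_is_valid([1, 3, 640, 640], [1, 3, 640, 640], [32, 3, 640, 640]))  # True
```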
or static-shape TRT with a non-square input
```python
_base_ = ['mmyolo::deploy/base_static.py']
onnx_config = dict(input_shape=(640, 800))
backend_config = dict(
    type='tensorrt',
    common_config=dict(fp16_mode=True, max_workspace_size=1 << 40),
    model_inputs=[
        dict(
            input_shapes=dict(
                input=dict(
                    min_shape=[1, 3, 640, 800],
                    opt_shape=[1, 3, 640, 800],
                    max_shape=[1, 3, 640, 800])))
    ])
use_efficientnms = True  # whether to replace TRTBatchedNMS plugin with EfficientNMS plugin  # noqa E501
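One thing worth double-checking in the static config: mmdeploy's docs describe `onnx_config.input_shape` as `(width, height)`, while `min_shape`/`opt_shape`/`max_shape` are `[N, C, H, W]`. If that holds here, `input_shape=(640, 800)` implies H=800, W=640, so the matching profile shape would be `[1, 3, 800, 640]` rather than `[1, 3, 640, 800]` — a mismatch like that could plausibly trigger the `to_backend` failure. A small sketch of the conversion (the helper name is hypothetical):

```python
# Hedged sketch: convert mmdeploy's (width, height) input_shape into the
# [N, C, H, W] layout used by the TensorRT model_inputs shapes.
def nchw_from_input_shape(input_shape, batch=1, channels=3):
    """Convert a (width, height) input_shape to an [N, C, H, W] list."""
    width, height = input_shape
    return [batch, channels, height, width]

# For the static config above: (640, 800) -> H=800, W=640.
print(nchw_from_input_shape((640, 800)))  # [1, 3, 800, 640]
```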
Command:
```shell
cd /root/workspace/mmdeploy && python tools/deploy.py \
    ${DEPLOY_CFG_PATH} \
    ${MODEL_CFG_PATH} \
    ${MODEL_CHECKPOINT_PATH} \
    ${INPUT_IMG} \
    --test-img ${TEST_IMG} \
    --work-dir ${WORK_DIR} \
    --device ${DEVICE} \
    --log-level INFO \
    --dump-info
```
Environment
Docker
Additional information
It works fine when using a static shape with a square input image, e.g. (640, 640), but the error occurs when:
- compiling static-shape TRT with a non-square input, for example (480x640)
- compiling dynamic-shape TRT with either a square or non-square input

I checked the output deploy.json and `batch_size` is 1 no matter what I change in the config. Is this normal?
```json
{
  "version": "1.2.0",
  "task": "Detector",
  "models": [
    {
      "name": "yolodetector",
      "net": "end2end.engine",
      "weights": "",
      "backend": "tensorrt",
      "precision": "FP16",
      "batch_size": 1,
      "dynamic_shape": true
    }
  ],
  "customs": []
}
```
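For what it's worth, the `batch_size` field in deploy.json may be informational: when `dynamic_shape` is true, the usable batch range presumably comes from the engine's optimization profile (`max_shape[0]` in the deploy config, 32 in the dynamic config above), not from this field. A quick sketch of that cross-check (the helper and the assumption are mine, not documented mmdeploy behavior):

```python
import json

# deploy.json content as produced by tools/deploy.py (copied from above).
deploy_json = """
{
  "version": "1.2.0",
  "task": "Detector",
  "models": [
    {
      "name": "yolodetector",
      "net": "end2end.engine",
      "weights": "",
      "backend": "tensorrt",
      "precision": "FP16",
      "batch_size": 1,
      "dynamic_shape": true
    }
  ],
  "customs": []
}
"""

def effective_max_batch(model, profile_max_batch):
    """Hypothetical helper: assume the profile's max batch wins for dynamic
    engines, and the recorded batch_size applies only to static ones."""
    return profile_max_batch if model.get("dynamic_shape") else model["batch_size"]

model = json.loads(deploy_json)["models"][0]
# 32 is max_shape[0] from the dynamic deploy config above.
print(model["backend"], model["precision"], effective_max_batch(model, 32))  # tensorrt FP16 32
```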