mmdetection-to-tensorrt
Mask (segment) inference error occurred.
Hello. I successfully installed this lib. Thanks for your recommendation.
I tested the simple demo and built my own TensorRT pth file (checkpoint). My model is a customized mask_rcnn_r50_fpn_fp16_1x_coco.
The first image demo inference test was successful and produced a trt pth output. But my model is Mask R-CNN, and the output contained only bounding boxes, without segmentation masks.
So I added the parameter enable_mask=True.
Then I got this error:
mask mode require len(output_names)==5 but get output_names=['num_detections', 'boxes', 'scores', 'classes']
/home/jwyng2000/PycharmProjects/pythonProject/newtst/mmdetection/mmdet/models/dense_heads/anchor_head.py:123: UserWarning: DeprecationWarning: anchor_generator is deprecated, please use "prior_generator" instead
warnings.warn('DeprecationWarning: anchor_generator is deprecated, '
/home/jwyng2000/PycharmProjects/pythonProject/newtst/mmdetection/mmdet/core/anchor/anchor_generator.py:369: UserWarning: ``single_level_grid_anchors`` would be deprecated soon. Please use ``single_level_grid_priors``
warnings.warn(
[TensorRT] INFO: [MemUsageChange] Init CUDA: CPU +521, GPU +0, now: CPU 3797, GPU 3253 (MiB)
Warning: Encountered known unsupported method torch.Tensor.new_tensor
Warning: Encountered known unsupported method torch.Tensor.new_tensor
[TensorRT] WARNING: IElementWiseLayer with inputs (Unnamed Layer* 1237) [ElementWise]_output and (Unnamed Layer* 1241) [Shuffle]_output: first input has type Float but second input has type Int32.
[TensorRT] WARNING: IElementWiseLayer with inputs (Unnamed Layer* 1246) [ElementWise]_output and (Unnamed Layer* 1250) [Shuffle]_output: first input has type Float but second input has type Int32.
[TensorRT] WARNING: IElementWiseLayer with inputs (Unnamed Layer* 1255) [ElementWise]_output and (Unnamed Layer* 1259) [Shuffle]_output: first input has type Float but second input has type Int32.
[TensorRT] WARNING: IElementWiseLayer with inputs (Unnamed Layer* 1264) [ElementWise]_output and (Unnamed Layer* 1268) [Shuffle]_output: first input has type Float but second input has type Int32.
[TensorRT] INFO: [MemUsageSnapshot] Builder begin: CPU 3992 MiB, GPU 2231 MiB
[TensorRT] WARNING: TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 11.3.0
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +95, GPU +264, now: CPU 4177, GPU 2495 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuDNN: CPU +127, GPU +58, now: CPU 4304, GPU 2553 (MiB)
[TensorRT] WARNING: Detected invalid timing cache, setup a local cache instead
[TensorRT] INFO: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[TensorRT] INFO: Detected 1 inputs and 5 output network tensors.
[TensorRT] INFO: Total Host Persistent Memory: 256352
[TensorRT] INFO: Total Device Persistent Memory: 92233216
[TensorRT] INFO: Total Scratch Memory: 401408000
[TensorRT] INFO: [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 139 MiB, GPU 4 MiB
[TensorRT] WARNING: TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 11.3.0
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 5457, GPU 3273 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuDNN: CPU +1, GPU +10, now: CPU 5458, GPU 3283 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 5457, GPU 3267 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 5457, GPU 3251 (MiB)
[TensorRT] INFO: [MemUsageSnapshot] Builder end: CPU 5456 MiB, GPU 3251 MiB
[TensorRT] INFO: [MemUsageSnapshot] ExecutionContext creation begin: CPU 5456 MiB, GPU 3251 MiB
[TensorRT] WARNING: TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 11.3.0
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 5456, GPU 3259 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuDNN: CPU +1, GPU +8, now: CPU 5457, GPU 3267 (MiB)
[TensorRT] INFO: [MemUsageSnapshot] ExecutionContext creation end: CPU 5457 MiB, GPU 5061 MiB
[TensorRT] WARNING: The logger passed into createInferRuntime differs from one already provided for an existing builder, runtime, or refitter. TensorRT maintains only a single logger pointer at any given time, so the existing value, which can be retrieved with getLogger(), will be used instead. In order to use a new logger, first destroy all existing builder, runner or refitter objects.
[TensorRT] INFO: [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 5636, GPU 5061 (MiB)
[TensorRT] INFO: Loaded engine size: 180 MB
[TensorRT] INFO: [MemUsageSnapshot] deserializeCudaEngine begin: CPU 5636 MiB, GPU 5061 MiB
[TensorRT] WARNING: TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 11.3.0
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 5639, GPU 5249 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 5639, GPU 5257 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 5639, GPU 5241 (MiB)
[TensorRT] INFO: [MemUsageSnapshot] deserializeCudaEngine end: CPU 5639 MiB, GPU 5241 MiB
[TensorRT] INFO: [MemUsageSnapshot] ExecutionContext creation begin: CPU 5639 MiB, GPU 5241 MiB
[TensorRT] WARNING: TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 11.3.0
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 5639, GPU 5249 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuDNN: CPU +1, GPU +8, now: CPU 5640, GPU 5257 (MiB)
[TensorRT] INFO: [MemUsageSnapshot] ExecutionContext creation end: CPU 5640 MiB, GPU 7051 MiB
Can not load dataset from config. Use default CLASSES instead.
/home/jwyng2000/PycharmProjects/pythonProject/newtst/mmdetection/mmdet/datasets/utils.py:66: UserWarning: "ImageToTensor" pipeline is replaced by "DefaultFormatBundle" for batch inference. It is recommended to manually replace it in the test data pipeline in your config file.
warnings.warn(
Traceback (most recent call last):
  File "inference.py", line 59, in <module>
    main()
  File "inference.py", line 48, in main
    result = inference_detector(trt_detector, image_path)
  File "/home/jwyng2000/PycharmProjects/pythonProject/newtst/mmdetection/mmdet/apis/inference.py", line 151, in inference_detector
    results = model(return_loss=False, rescale=True, **data)
  File "/home/jwyng2000/PycharmProjects/pythonProject/newtst/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jwyng2000/PycharmProjects/toymmd/mmdetection/mmdetection-to-tensorrt/mmdet2trt/apis/inference.py", line 188, in forward
    segms_results = FCNMaskHead.get_seg_masks(
  File "/home/jwyng2000/PycharmProjects/pythonProject/newtst/mmdetection/mmdet/models/roi_heads/mask_heads/fcn_mask_head.py", line 293, in get_seg_masks
    masks_chunk, spatial_inds = _do_paste_mask(
  File "/home/jwyng2000/PycharmProjects/pythonProject/newtst/mmdetection/mmdet/models/roi_heads/mask_heads/fcn_mask_head.py", line 384, in _do_paste_mask
    x0, y0, x1, y1 = torch.split(boxes, 1, dim=1) # each is Nx1
  File "/home/jwyng2000/PycharmProjects/pythonProject/newtst/venv/lib/python3.8/site-packages/torch/functional.py", line 156, in split
    return tensor.split(split_size_or_sections, dim)
  File "/home/jwyng2000/PycharmProjects/pythonProject/newtst/venv/lib/python3.8/site-packages/torch/_tensor.py", line 510, in split
    return super(Tensor, self).split(split_size, dim)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 5459, GPU 6977 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 5366, GPU 4987 (MiB)
Here is my code:
import torch
from argparse import ArgumentParser

from mmdet.apis import inference_detector
from mmdet2trt import mmdet2trt
from mmdet2trt.apis import create_wrap_detector


def main():
    parser = ArgumentParser()
    parser.add_argument('img', help='Image file')
    parser.add_argument('config', help='mmdet Config file')
    parser.add_argument('checkpoint', help='mmdet Checkpoint file')
    parser.add_argument('save_path', help='tensorrt model save path')
    parser.add_argument(
        '--device', default='cuda:0', help='Device used for inference')
    parser.add_argument(
        '--score-thr', type=float, default=0.3, help='bbox score threshold')
    parser.add_argument(
        '--fp16', action='store_true', help='enable fp16 inference')
    args = parser.parse_args()

    cfg_path = args.config

    opt_shape_param = [
        [
            [1, 3, 320, 320],  # min shape
            # [1, 3, 800, 800],  # opt shape
            # [1, 3, 1344, 1344],  # max shape
            [2, 3, 800, 800],  # opt shape
            [4, 3, 1344, 1344],  # max shape
        ]
    ]

    # convert the mmdet checkpoint to a TensorRT model and save it
    trt_model = mmdet2trt(
        cfg_path, args.checkpoint, fp16_mode=args.fp16, device=args.device,
        enable_mask=True,
        opt_shape_param=opt_shape_param,
        output_names=["num_detections", "boxes", "scores", "classes"]
    )
    torch.save(trt_model.state_dict(), args.save_path)

    # wrap the saved TensorRT model as an mmdet-compatible detector
    trt_detector = create_wrap_detector(args.save_path, cfg_path, args.device)

    image_path = args.img
    result = inference_detector(trt_detector, image_path)
    trt_detector.show_result(
        image_path,
        result,
        score_thr=args.score_thr,
        win_name='mmdet2trt_demo',
        show=True)


if __name__ == '__main__':
    main()
My command:
python inference.py ../00000010.jpg ../../configs/mask_rcnn/mask_rcnn_r50_fpn_fp16_1x_coco.py ../../checkpoints/epoch_72"(origin)".pth ../../checkpoints/epoch_72"(origin)"_trt1.pth
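From the "mask mode require len(output_names)==5" message, my guess is that with enable_mask=True the converter expects five output names instead of the four I pass, maybe with a fifth name for the mask tensor ("masks"? I am not sure that name is correct). This is what I would try:

# Just my guess, not verified: add a fifth output name for the mask tensor
# when enable_mask=True (or drop output_names entirely and use the defaults).
trt_model = mmdet2trt(
    cfg_path, args.checkpoint, fp16_mode=args.fp16, device=args.device,
    enable_mask=True,
    opt_shape_param=opt_shape_param,
    output_names=["num_detections", "boxes", "scores", "classes", "masks"])

But I have not verified this, and the traceback above also worries me, so I am not sure it is the whole problem.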
How can I fix this?
I would really appreciate an answer. Thank you.