YOLOv8-TensorRT
Pose Inference Runtime Error
Hi, I am facing a runtime error while running infer-pose.py.
Command to export the nano pose model:
yolo export model=runs/pose/train13/weights/nano_club_pose_model.pt format=onnx simplify=True opset=11
ONNX to TensorRT conversion command:
/opt/tensorrt/bin/trtexec --onnx=../runs/pose/train13/weights/nano_club_pose_model.onnx --saveEngine=yolov-pose-fp16.engine --fp16
NVIDIA Docker container: nvcr.io/nvidia/tensorrt:23.01-py3
TensorRT: 8.5.2.2, CUDA: 12.0.1
I used this command for inference:
python3 infer-pose.py --engine yolo-pose-fp16.engine --imgs ../images/ --out-dir outputs --device cuda:0
Traceback (most recent call last):
File "infer-pose.py", line 112, in <module>
main(args)
File "infer-pose.py", line 37, in main
bboxes, scores, kpts = pose_postprocess(data, args.conf_thres,
File "/app/saim/YOLOv8-TensorRT/models/torch_utils.py", line 48, in pose_postprocess
bboxes, scores, kpts = outputs.split([4, 1, 51], 1)
File "/usr/local/lib/python3.8/dist-packages/torch/_tensor.py", line 791, in split
return torch._VF.split_with_sizes(self, split_size, dim)
RuntimeError: split_with_sizes expects split_sizes to sum exactly to 17 (input tensor's size at dimension 1), but got split_sizes=[4, 1, 51]
You should use export-pose.py to export the ONNX model and then build the engine.
Thanks @triple-Mu, I did that but still faced the same error. I resolved it by replacing outputs.split([4, 1, 51], 1) with outputs.split([4, 1, 12], 1) (see the sketch below), and now it's working fine. However, I get FPS with preprocessing: 81.73 | FPS without preprocessing: 673.35 (image size = 1280x1280, GPU = RTX 4090).
Any suggestions to speed up the preprocessing part?
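For context, here is a minimal sketch (not the repo's torch_utils.py) of why [4, 1, 12] works: a pose head outputs 4 box values, 1 score, and 3 values per keypoint, so a custom 4-keypoint model has 4 + 1 + 12 = 17 channels, while the standard 17-keypoint COCO model has 4 + 1 + 51 = 56. The helper name pose_split is illustrative.

import torch

# Illustrative sketch: derive the split sizes from the keypoint count instead
# of hard-coding [4, 1, 51], so custom pose models postprocess correctly.
def pose_split(outputs: torch.Tensor, num_kpts: int):
    # outputs: (num_dets, 4 + 1 + num_kpts * 3) -> boxes, scores, keypoints
    return outputs.split([4, 1, num_kpts * 3], dim=1)

# Dummy output for a 4-keypoint model: 4 + 1 + 12 = 17 channels per detection.
dummy = torch.zeros(10, 17)
bboxes, scores, kpts = pose_split(dummy, num_kpts=4)
print(bboxes.shape, scores.shape, kpts.shape)  # (10, 4) (10, 1) (10, 12)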
Maybe you can try CUDA warp affine.
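For reference, a rough sketch of a GPU letterbox using cv2.cuda.warpAffine, assuming OpenCV is built with CUDA support (the cv2.cuda module); the helper name letterbox_gpu and the gray padding value 114 are only illustrative, not this repo's code:

import cv2
import numpy as np

# Rough sketch: letterbox-resize on the GPU with a single affine warp,
# assuming OpenCV was compiled with the CUDA modules (cv2.cuda).
def letterbox_gpu(img: np.ndarray, dst_w: int = 1280, dst_h: int = 1280):
    h, w = img.shape[:2]
    r = min(dst_w / w, dst_h / h)          # scale factor that preserves aspect
    pad_x = (dst_w - w * r) / 2            # horizontal padding to center
    pad_y = (dst_h - h * r) / 2            # vertical padding to center
    M = np.array([[r, 0, pad_x], [0, r, pad_y]], dtype=np.float32)

    gpu_src = cv2.cuda_GpuMat()
    gpu_src.upload(img)                    # host -> device copy
    gpu_dst = cv2.cuda.warpAffine(
        gpu_src, M, (dst_w, dst_h),
        flags=cv2.INTER_LINEAR,
        borderMode=cv2.BORDER_CONSTANT,
        borderValue=(114, 114, 114))       # gray padding, as in YOLO letterbox
    return gpu_dst.download(), M           # device -> host; M maps boxes back

Skipping the final download and feeding the device buffer straight into the TensorRT bindings would save another host/device round trip, but that depends on how your inference buffers are set up.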
I am using yolov8s-pose-p6.engine, whose input size is (1280, 1280). If the image is (1280, 1280) it works fine, but my image size is (1280, 720) and the output does not detect any keypoints. How can I use yolov8s-pose-p6.engine to detect on my (1280, 720) image? Thanks.
You should modify the export script to export at 1280x720, and modify the C++ code for the image height and width.
Thanks.
Hi @tuteming, both of your dimensions need to be divisible by 32. In the 1280x1280 case both dimensions are divisible by 32, which is why it works fine, while 720 is not. This is a constraint of YOLO: it only works with input dimensions that are divisible by 32.
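For example (an illustrative helper, not from this repo), rounding a dimension up to the next multiple of the stride gives 736 for a height of 720:

# Illustrative helper: round a dimension up to the next multiple of the model
# stride (32 for YOLOv8), since the network downsamples by that factor.
def round_to_stride(x: int, stride: int = 32) -> int:
    return ((x + stride - 1) // stride) * stride

print(round_to_stride(720), round_to_stride(1280))  # 736 1280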
Thanks.