YOLOv7_Tensorrt
How is the mAP of YOLOv7 with TensorRT?
Can you test the mAP of YOLOv7 on coco2017val with TensorRT? Thank you.
The COCO dataset is too large for me to download, sorry. You can try it yourself.
The result of YOLOv7 with TensorRT tested on coco2017val is as follows:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.481
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.659
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.521
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.308
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.533
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.644
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.364
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.561
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.578
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.382
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.636
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.749
The mAP gap between PyTorch and TensorRT is more than 3%.
Do you keep the IoU and score thresholds of NMS the same? The defaults are 0.45 and 0.25; they affect the mAP. 3% is a relatively large accuracy gap.
The IoU and score thresholds are the same as in YOLOv7's test.py.
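For reference, the score and IoU thresholds mentioned above (0.25 and 0.45) are typically applied like this; a minimal, class-agnostic sketch, not necessarily the exact post-processing used in this repo:

```python
import torch
import torchvision


def postprocess(boxes, scores, conf_thres=0.25, iou_thres=0.45):
    """Score-filter raw detections, then run NMS.

    boxes:  (N, 4) tensor in (x1, y1, x2, y2) format
    scores: (N,)   tensor of per-box confidence scores
    """
    # 1. Drop low-confidence boxes (score threshold, default 0.25).
    keep = scores > conf_thres
    boxes, scores = boxes[keep], scores[keep]

    # 2. Suppress overlapping boxes (IoU threshold, default 0.45).
    idx = torchvision.ops.nms(boxes, scores, iou_thres)
    return boxes[idx], scores[idx]


# Example: two heavily overlapping boxes; only the higher-scoring one survives.
boxes = torch.tensor([[0., 0., 10., 10.], [1., 1., 11., 11.]])
scores = torch.tensor([0.9, 0.6])
print(postprocess(boxes, scores))
```

Changing either threshold on only one side (PyTorch or TensorRT) will shift the measured mAP, which is why the comparison below keeps them identical.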
PyTorch mAP on coco2017val:
- Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.512
- Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.697
- Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.555
- Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.353
- Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.559
- Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.667
- Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.385
- Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.638
- Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.688
- Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.538
- Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.735
- Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.838
Were you able to calculate the mAP of the .trt model?
@egbertYeah Were you able to calculate the mAP of the .trt model? If yes, can you please share the code to do so?
You can refer to this file: https://github.com/egbertYeah/mt-yolov6_tensorrt/blob/main/eval_yolov6.py
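The overall flow in that script is roughly the following; a sketch assuming a `trt_infer(image_path)` helper (a placeholder, not part of the script) that runs the engine and returns detections in COCO pixel coordinates:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval


def evaluate(ann_file, image_dir, trt_infer):
    """Run TensorRT inference over the val set and report COCO mAP.

    trt_infer(image_path) is assumed to return a list of
    (x, y, w, h, score, category_id) tuples per image.
    """
    coco_gt = COCO(ann_file)                      # ground-truth annotations
    results = []
    for img_id in coco_gt.getImgIds():
        file_name = coco_gt.loadImgs(img_id)[0]["file_name"]
        for x, y, w, h, score, cat_id in trt_infer(f"{image_dir}/{file_name}"):
            results.append({
                "image_id": img_id,
                "category_id": cat_id,
                "bbox": [x, y, w, h],             # COCO boxes are xywh
                "score": score,
            })

    coco_dt = coco_gt.loadRes(results)            # detections as a COCO result set
    coco_eval = COCOeval(coco_gt, coco_dt, "bbox")
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()                         # prints the AP/AR table shown above
```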
Thank you for your reply @egbertYeah, but I have trained my YOLOv7 model on a custom dataset and then generated its .trt model. Would this code run properly for it, given that the ground-truth annotations of my dataset are in .txt format?
Which mAP format do you want to compute, COCO or VOC?
COCO, i.e. averaged over 10 IoU threshold values. I want results as shown in the image below.
You can convert the YOLO (.txt) format to COCO format, then use the COCO API to get the COCO mAP.
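A minimal sketch of that conversion, assuming the usual YOLO layout (one `.txt` per image containing `class x_center y_center width height` normalized to [0, 1]); paths, the `.jpg` extension, and class names are placeholders:

```python
import json
from pathlib import Path

from PIL import Image


def yolo_to_coco(image_dir, label_dir, class_names, out_file="gt_coco.json"):
    """Convert YOLO-format .txt ground truth into a COCO annotation file."""
    coco = {
        "images": [], "annotations": [],
        "categories": [{"id": i, "name": n} for i, n in enumerate(class_names)],
    }
    ann_id = 0
    for img_id, img_path in enumerate(sorted(Path(image_dir).glob("*.jpg"))):
        w, h = Image.open(img_path).size
        coco["images"].append({"id": img_id, "file_name": img_path.name,
                               "width": w, "height": h})
        label = Path(label_dir) / (img_path.stem + ".txt")
        if not label.exists():
            continue
        for line in label.read_text().splitlines():
            cls, xc, yc, bw, bh = map(float, line.split())
            bw, bh = bw * w, bh * h                   # de-normalize to pixels
            x, y = xc * w - bw / 2, yc * h - bh / 2   # center -> top-left corner
            coco["annotations"].append({
                "id": ann_id, "image_id": img_id, "category_id": int(cls),
                "bbox": [x, y, bw, bh], "area": bw * bh, "iscrowd": 0,
            })
            ann_id += 1
    with open(out_file, "w") as f:
        json.dump(coco, f)
```

The resulting JSON can then be loaded with pycocotools (`COCO("gt_coco.json")`) and compared against the .trt detections with `COCOeval`, as in the sketch above.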
Hey @egbertYeah! I'm getting this error while running eval_yolov6.py:
TensorRT inference engine settings:
- Inference precision - DataType.FLOAT
- Max batch size - 1
Loading cached TensorRT engine from /content/gdrive/MyDrive/tensorrt-python/best-fp16.trt
[04/11/2023-05:13:15] [TRT] [E] 3: getPluginCreator could not find plugin: EfficientNMS_TRT version: 1
[04/11/2023-05:13:15] [TRT] [E] 1: [pluginV2Runner.cpp::load::303] Error Code 1: Serialization (Serialization assertion creator failed.Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
Traceback (most recent call last):
File "/content/gdrive/MyDrive/mt-yolov6_tensorrt/eval_yolov6.py", line 144, in
Can you please help me with this? Thanks!
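That error usually means TensorRT's built-in plugins (which include EfficientNMS_TRT) were not registered before the engine was deserialized, or that the engine was built with a different TensorRT version than the one doing inference. Below is a minimal sketch of registering the plugins before loading the engine; the file path is taken from the log above, and whether this resolves this particular case is an assumption:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

# Register TensorRT's built-in plugins (EfficientNMS_TRT lives here)
# before deserializing an engine that contains plugin layers.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

with open("/content/gdrive/MyDrive/tensorrt-python/best-fp16.trt", "rb") as f, \
        trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
```

If the plugins are already registered, check that the TensorRT version used to build the engine matches the one installed in the runtime, since serialized engines are not portable across versions.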