
Evaluation Test - PyTorch (Original) vs TensorRT

levipereira opened this issue 4 months ago · 0 comments

Hi @WongKinYiu,

I successfully validated the YOLOv9 models using the YOLOv9 End2End implementation, and the results were quite satisfactory. Since I've managed to validate it end-to-end, I'm considering opening a pull request to implement the End2End functionality.

While the performance/latency tests are still pending, validating the evaluation accuracy was crucial to ensuring everything is functioning correctly. It's worth noting that all the tests I've developed on the triton-server depend entirely on the End2End implementation.

Source Report

Evaluation Test on TensorRT

The evaluation test was performed using this client.

Models Details

| Model (ONNX) | Test Size | APval | AP50val | AP75val |
|---|---|---|---|---|
| YOLOv9-C (FP16) | 640 | 52.8% | 70.1% | 57.7% |
| YOLOv9-E (FP16) | 640 | 55.4% | 72.6% | 60.3% |

Evaluation Test original (PyTorch)

| Model | Test Size | APval | AP50val | AP75val |
|---|---|---|---|---|
| YOLOv9-C | 640 | 53.0% | 70.2% | 57.8% |
| YOLOv9-E | 640 | 55.6% | 72.8% | 60.6% |

Evaluation Comparison - TensorRT vs PyTorch

| Model (ONNX, TensorRT) | Model (PyTorch) | Test Size | ΔAPval | ΔAP50val | ΔAP75val |
|---|---|---|---|---|---|
| YOLOv9-C (FP16) | YOLOv9-C | 640 | -0.2% | -0.1% | -0.1% |
| YOLOv9-E (FP16) | YOLOv9-E | 640 | -0.2% | -0.2% | -0.3% |
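The deltas are simply TensorRT minus PyTorch, in percentage points. A quick sketch to reproduce them from the figures reported above (values hard-coded from this issue):

```python
# Reproduce the accuracy deltas (TensorRT minus PyTorch) from the two tables
# above. The figures are the AP/AP50/AP75 percentages reported in this issue.
pytorch = {"YOLOv9-C": (53.0, 70.2, 57.8), "YOLOv9-E": (55.6, 72.8, 60.6)}
tensorrt = {"YOLOv9-C": (52.8, 70.1, 57.7), "YOLOv9-E": (55.4, 72.6, 60.3)}

def ap_deltas(model: str) -> tuple:
    """Return (dAP, dAP50, dAP75) in percentage points, rounded to 1 decimal."""
    return tuple(round(t - p, 1) for t, p in zip(tensorrt[model], pytorch[model]))

for m in pytorch:
    print(m, ap_deltas(m))
# YOLOv9-C (-0.2, -0.1, -0.1)
# YOLOv9-E (-0.2, -0.2, -0.3)
```

A drop of at most 0.3 percentage points is consistent with the usual cost of FP16 quantization.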

The evaluation test can be reproduced as follows:

```bash
git clone https://github.com/levipereira/triton-server-yolo.git
cd triton-server-yolo
# Start the Docker container
bash ./start-container-triton-server.sh

# Inside the container
cd /apps
bash ./start-triton-server.sh \
  --models yolov9-c,yolov9-e \
  --model_mode eval \
  --efficient_nms enable \
  --opt_batch_size 1 \
  --max_batch_size 1 \
  --instance_group 1
```

Then open a new shell session:

```bash
# Recommended: use a Python virtual environment
git clone https://github.com/levipereira/triton-client-yolo.git
cd triton-client-yolo
pip install -r requirements.txt
# Download COCO manually if you don't have it
bash ./scripts/get_coco.sh
python3 coco_eval.py --model yolov9-c --data data/coco.yaml
```

Output example:


```
==================== TRITON SERVER ====================
Evaluating Model:  yolov9-c
Inferencing images: 100%|██████████████████████████████████████████████████████████████████████| 5000/5000 [02:20<00:00, 35.66it/s]

Evaluating pycocotools mAP... saving ./_predictions.json...
loading annotations into memory...
Done (t=0.32s)
creating index...
index created!
Loading and preparing results...
DONE (t=6.19s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=73.49s).
Accumulating evaluation results...
DONE (t=22.59s).

========================= EVALUATION SUMMARY - YOLOV9-C ========================
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.528
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.700
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.576
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.361
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.582
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.689
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.392
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.652
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.702
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.539
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.758
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.848
================================================================================
mAP@0.5:0.95: 0.528
mAP@0.5:      0.7
mAP@0.75:     0.576
================================================================================
```

levipereira · Mar 06 '24 19:03