yolov10
Could the baseline methods possibly have better results by tuning the NMS parameters?
Dear authors,
Thanks for making this great contribution to the community!
I understand that you are using the original NMS hyper-parameters from the previous YOLO works to benchmark their end-to-end latency. However, I would assume that simply tuning those hyper-parameters may lead to stronger performance for the baseline YOLO works. For example, if we increase the confidence threshold appropriately for the previous YOLO works, the end-to-end latency may drop significantly, because far fewer predicted bounding boxes enter NMS. At the same time, I would expect some drop in AP. Still, the overall trade-off between accuracy and efficiency could be largely improved.
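To make the intuition above concrete, here is a minimal, illustrative class-agnostic NMS sketch (not the implementation used in the paper; the box format, thresholds, and greedy suppression loop are my own assumptions). The point is simply that confidence filtering happens before the quadratic suppression loop, so raising the confidence threshold directly shrinks the candidate set and the NMS workload:

```python
def iou(a, b):
    # a, b: axis-aligned boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def nms(boxes, scores, conf_thres=0.25, iou_thres=0.45):
    # Confidence filtering happens *before* the pairwise suppression loop,
    # so a higher conf_thres cuts the number of IoU comparisons (and hence
    # post-processing latency) at the cost of discarding low-score boxes.
    cand = sorted(
        (i for i, s in enumerate(scores) if s >= conf_thres),
        key=lambda i: scores[i], reverse=True,
    )
    keep = []
    while cand:
        best = cand.pop(0)
        keep.append(best)
        cand = [i for i in cand if iou(boxes[best], boxes[i]) < iou_thres]
    return keep

# Two overlapping pairs of boxes; the low-score box in each pair behaves
# differently depending on conf_thres.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30), (21, 21, 31, 31)]
scores = [0.9, 0.6, 0.8, 0.2]
print(nms(boxes, scores, conf_thres=0.25))  # -> [0, 2]
```

With a stricter threshold (e.g. `conf_thres=0.85`), only one box survives filtering and NMS does no pairwise work at all, which is exactly the latency-vs-AP trade-off I am asking about.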
For the above reason, I'm wondering whether you have tried adjusting the NMS hyper-parameters in YOLOv8 (or any other previous YOLO work) so that the end-to-end latencies of YOLOv8-n/s/m/l are aligned with those of YOLOv10-n/s/m/l, and then comparing how much AP gain YOLOv10 achieves. (I don't doubt that YOLOv10 will still outperform the baselines, since the baselines have lower APs even without adjusting the NMS hyper-parameters, but it would be nice to see how large the gap is.)
Any information would be appreciated. Thank you in advance!