Vladimir Zlobin
It turns out `ceil()` is a big deal. `print(ultralytics.models.yolo.detect.DetectionValidator(args={"data": "coco.yaml", "device": "cpu"})(model="yolov8n.pt")["metrics/mAP50-95(B)"])` gives `0.3733794927328109` now
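For readability, the same check spelled out (this assumes `coco.yaml` resolves to a local copy of the COCO val set):

```python
import ultralytics

# Same validation run as the one-liner above, just formatted for readability.
validator = ultralytics.models.yolo.detect.DetectionValidator(
    args={"data": "coco.yaml", "device": "cpu"}
)
stats = validator(model="yolov8n.pt")
print(stats["metrics/mAP50-95(B)"])  # 0.3733794927328109 with ceil()
```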
Well, it's more complicated. I tried [`max(round(), 1)`](https://github.com/ultralytics/ultralytics/pull/5069/commits/d96cd9c6999237a7bb2881863e71a1dd4807f4c1#diff-0b2d0be08e156856d63e748af252444d26d092bf1ca32e56c4a25074efccc4c9L156), so zero-width images are not the issue. I also have my own reimplementation of the mAP calculation, which minimizes modification of ground...
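To make the distinction concrete, here is a purely illustrative sketch (not the code from the diff) of how the two rounding strategies differ when computing a stride-aligned rect dimension; the `max(..., 1)` guard is what rules out a zero-sized dimension:

```python
import math

STRIDE = 32  # hypothetical model stride, for illustration only

def rect_dim_ceil(dim: int, ratio: float) -> int:
    # Always rounds up, so the padded dimension can grow by a full stride.
    return math.ceil(dim * ratio / STRIDE) * STRIDE

def rect_dim_round(dim: int, ratio: float) -> int:
    # Rounds to nearest, with a floor of one stride so the dimension never becomes 0.
    return max(round(dim * ratio / STRIDE), 1) * STRIDE

print(rect_dim_ceil(500, 0.01), rect_dim_round(500, 0.01))  # 32 32
print(rect_dim_ceil(500, 1.3), rect_dim_round(500, 1.3))    # 672 640
```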
Ready to merge
FYI, I just tried `yolov5n6u.pt` instead of `yolov8n.pt`. It looks like having `ceil()` reduces mAP50-95 for `yolov5n6u.pt`... So this PR becomes controversial.
> Given this, we would proceed with caution before incorporating such changes.

Will someone be running benchmarks for this PR?
You report a different mAP for `yolov8n.pt` compared to the numbers I reported in this PR. The problem is that I missed setting `'rect': True` in `print(ultralytics.models.yolo.detect.DetectionValidator(args={"data": "coco.yaml", "device": "cpu"})(model="yolov8n.pt")["metrics/mAP50-95(B)"])`. I...
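For reference, the corrected call, i.e. the snippet above with `'rect': True` added to the args, would look roughly like this:

```python
import ultralytics

# Same validation run as before, but with rectangular inference enabled,
# which is the setting I had missed.
validator = ultralytics.models.yolo.detect.DetectionValidator(
    args={"data": "coco.yaml", "device": "cpu", "rect": True}
)
print(validator(model="yolov8n.pt")["metrics/mAP50-95(B)"])
```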
I will get back to it later. Meanwhile, I want to report another problem. Tested commit 19c3314e68b47c8c5cb799f709868b083921067a from `main`:

```sh
python -c "import ultralytics; ultralytics.YOLO('yolov5n6u.pt').val(data='coco8.yaml')"
Ultralytics YOLOv8.0.188 Python-3.10.11 torch-2.0.1+cpu CPU...
```
Is there a way to reproduce the mAP reported in https://github.com/ultralytics/ultralytics/blob/2624fc04fb4dc76708d8eafdcfacf0781510757d/docs/models/yolov5.md#supported-modes for `yolov5n6u.pt` without modifying the code?
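For clarity, the no-code-modification path I have in mind is the plain `val()` call, the same as the `coco8.yaml` command above but pointed at the full dataset:

```python
import ultralytics

# Plain validation run, no source changes; assumes coco.yaml resolves to the
# full COCO val set rather than the 8-image coco8 subset used above.
metrics = ultralytics.YOLO("yolov5n6u.pt").val(data="coco.yaml", device="cpu")
print(metrics.box.map)  # mAP50-95
```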