Added support by using `oriented_box_iou_batch`
Add Support for Oriented Bounding Boxes in MeanAveragePrecision
Changes
- Modified `MeanAveragePrecision` class to support oriented bounding boxes
- ~~Added integration tests for `MeanAveragePrecision`~~
Details
- `MeanAveragePrecision` now handles `ORIENTED_BOUNDING_BOXES` as a metric target using `oriented_box_iou_batch`.
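In practice, the new target could be exercised with a minimal sketch like the one below (illustrative only, not part of the diff; it assumes the `supervision.metrics` update/compute API and that OBB corner points live under the `"xyxyxyxy"` data key, as `sv.Detections.from_ultralytics` stores them for OBB models):

```python
import numpy as np
import supervision as sv
from supervision.metrics import MeanAveragePrecision, MetricTarget

# Two identical single-box Detections; OBB corners go in the
# "xyxyxyxy" data key as an (N, 4, 2) array of corner points.
corners = np.array([[[10, 10], [50, 10], [50, 30], [10, 30]]], dtype=np.float32)
targets = sv.Detections(
    xyxy=np.array([[10, 10, 50, 30]], dtype=np.float32),
    class_id=np.array([0]),
    data={"xyxyxyxy": corners},
)
predictions = sv.Detections(
    xyxy=np.array([[10, 10, 50, 30]], dtype=np.float32),
    class_id=np.array([0]),
    confidence=np.array([0.9]),
    data={"xyxyxyxy": corners},
)

map_metric = MeanAveragePrecision(metric_target=MetricTarget.ORIENTED_BOUNDING_BOXES)
map_result = map_metric.update(predictions, targets).compute()
print(map_result.map50_95)  # perfect overlap, so this should be 1.0
```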
@LinasKo I have created this PR by making the required changes in the code according to issue #1562. Please review it and let me know if any changes are required.
Hi @PrakharJain1509 ,
Thank you very much! I'll have a look and review it tomorrow morning. Thank you for your contribution!
Sure, thanks!
@LinasKo Sir, any updates regarding this?
Looking at it right now :wink:
Hi @PrakharJain1509 👋
I can only see one commit at the moment, and that does not include everything you mentioned. Is there some code you haven't pushed yet?
> Added integration tests for `MeanAveragePrecision`. New tests cover various scenarios including perfect overlap, no overlap, and rotated boxes.

Normally we ask contributors to make a Colab, as that makes the review process much faster - we don't need to create an additional environment where we can run and tweak the values, and we get a bit of code describing the current behaviour that we can look back at in the future. If you're adding well-rounded tests, the Colab is optional.
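For reference, the scenarios named above (perfect overlap, no overlap, rotated boxes) could be checked directly against `oriented_box_iou_batch` with something like this sketch (my own illustration, not the PR's test code; it assumes the function is importable from `supervision.detection.utils` and accepts `(N, 4, 2)` corner arrays, and it uses loose tolerances in case the IoU is computed approximately):

```python
import numpy as np
from supervision.detection.utils import oriented_box_iou_batch

# A 40x20 rectangle centered at (100, 100), expressed as an oriented box.
box = np.array([[[80, 90], [120, 90], [120, 110], [80, 110]]], dtype=np.float32)
# The same rectangle shifted far away: zero overlap.
far_box = box + 500.0
# The rectangle rotated 90 degrees about its center; analytically the IoU
# with the original is 1/3 (intersection 400, union 1200).
rotated = np.array([[[110, 80], [110, 120], [90, 120], [90, 80]]], dtype=np.float32)

assert oriented_box_iou_batch(box, box)[0, 0] > 0.99        # perfect overlap
assert oriented_box_iou_batch(box, far_box)[0, 0] < 0.01    # no overlap
assert 0.25 < oriented_box_iou_batch(box, rotated)[0, 0] < 0.42  # rotated boxes
```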
Are the changes I made so far correct, or are they also invalid? I also thought that the function was already implemented, so I just have to use it in the `MeanAveragePrecision` class. Please guide me on how to proceed and what steps I should take to resolve this.
Sure! The steps so far are correct. Did you write any code for the tests like you said?
> Added integration tests for `MeanAveragePrecision`

If you did, the code is not visible in the Files Changed section. In that case, you have to commit the code and push it to your `develop` branch. It will be visible on the PR when you do.
Okay, so only the test code part is remaining? Please give me some time; I will write that part and update you by tomorrow morning. Thanks!
Unit tests are welcome, but typically not required. However, since you wrote in the PR description that unit tests are included, I expect to see the code I'm reviewing to include them 😄
What we really like is a Colab that shows how to use the new feature. It should run the `MeanAveragePrecision` metric on OBB results. For example, you can use `yolo11n-obb`, `yolo11s-obb`, and `yolo11m-obb` and compare the results. You should also test what happens when you attempt to compute mAP for OBB detections that don't overlap, and detections that are empty (created with `sv.Detections.empty()`).
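For anyone following along, the requested Colab flow could look roughly like the outline below (my sketch, not the eventual notebook; the image path is a placeholder, and using the largest model's output as stand-in targets is just one way to get differing predictions to compare):

```python
import supervision as sv
from supervision.metrics import MeanAveragePrecision, MetricTarget
from ultralytics import YOLO

IMAGE_PATH = "sample_aerial.jpg"  # placeholder: any image with oriented objects

# Use the largest model's detections as stand-in targets, then score the others.
targets = sv.Detections.from_ultralytics(YOLO("yolo11m-obb.pt")(IMAGE_PATH)[0])

for name in ["yolo11n-obb.pt", "yolo11s-obb.pt", "yolo11m-obb.pt"]:
    predictions = sv.Detections.from_ultralytics(YOLO(name)(IMAGE_PATH)[0])
    metric = MeanAveragePrecision(metric_target=MetricTarget.ORIENTED_BOUNDING_BOXES)
    result = metric.update(predictions, targets).compute()
    print(name, result.map50_95)

# Edge case from the review request: empty detections on both sides.
empty_metric = MeanAveragePrecision(metric_target=MetricTarget.ORIENTED_BOUNDING_BOXES)
print(empty_metric.update(sv.Detections.empty(), sv.Detections.empty()).compute())
```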
I looked into the tests, but I am unable to test it. Can you please help me with the testing and tell me whether the changes I already made are fine or need some modifications?
Also, I have created a Google Colab in which I have loaded the whole `MeanAveragePrecision` code, but I am unable to test it. Could you help me with that part?
Hi @PrakharJain1509,
I won't be able to help with testing, but can give some pointers regarding the Colab. If we can create the Colab that checks how the new functionality works, then unit testing won't be necessary. Please share a link to the Colab.
I am also opening this up to the community for a second contributor.
Okay, I am done with the testing part and I am ready with the Colab link. Please have a look at it and tell me if anything else is required. Also, I made one more commit with some minor changes.
Here is the link: https://colab.research.google.com/drive/1m8RK27dEd2D6HfHVnzRr0-npTjfVQwjp?usp=sharing
@LinasKo Did you look into it sir?
@LinasKo Please look into the Colab file and let me know if any changes are required.
Hi @PrakharJain1509,
Good work with the Colab. Stepping into a new repository and having to learn how everything is put together is tricky, and I can see you're putting good effort into it. Especially the metric tests at the end - that is definitely going in the right direction.
Here are a few points of improvement:
- Repo installation can be made simpler: Starter Template.
- We can add a few more use cases, e.g. showing metric results when predictions/targets are empty.
- Your latest commit reverts some code we have on the `develop` branch. We should have merged `develop` into your branch instead.
- This PR only adds OBB support to mAP. F1 score should be covered as well (see the sketch after this comment).
With that in mind, while this has more issues than many other PRs, I'm quite happy with the result. I'll accept your contribution, take over from here, and prepare it for merging. Thank you very much! 🙏
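Once OBB support is added to the F1 metric as mentioned above, the usage would presumably mirror the mAP pattern. A sketch of what that might look like (assuming `supervision.metrics.F1Score` accepts the same `metric_target` and the same `"xyxyxyxy"` data-key convention as the earlier examples):

```python
import numpy as np
import supervision as sv
from supervision.metrics import F1Score, MetricTarget

# Minimal OBB detections, mirroring the earlier mAP sketch.
corners = np.array([[[10, 10], [50, 10], [50, 30], [10, 30]]], dtype=np.float32)
detections = sv.Detections(
    xyxy=np.array([[10, 10, 50, 30]], dtype=np.float32),
    class_id=np.array([0]),
    confidence=np.array([0.9]),
    data={"xyxyxyxy": corners},
)

# Same update/compute pattern as MeanAveragePrecision, but for F1.
f1_metric = F1Score(metric_target=MetricTarget.ORIENTED_BOUNDING_BOXES)
f1_result = f1_metric.update(detections, detections).compute()
print(f1_result.f1_50)  # identical predictions and targets, so ~1.0
```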
> We can add a few more use cases, e.g. showing metric results when predictions/targets are empty.

@LinasKo The second-to-last block of code, just before the plots, is about empty detections. Thank you for accepting this PR, and please let me know in the future if anything is required.
Test colab: https://colab.research.google.com/drive/1r5Z0No4cj0px3M_tbnG8mjk3JRjXsivF?usp=sharing
Merging - everything seems to be in order now.
Thank you for the contribution @PrakharJain1509. Expect to see it included in the next supervision release! 🤝