
[DetectionDataset] - expand `from_yolo` to include support for OBB (Oriented Bounding Boxes)

Open pedbrgs opened this issue 1 year ago • 18 comments

Description

In supervision-0.18.0, we added initial support for OBB; it's time to extend it to include dataset loading.

Make the necessary changes to sv.DetectionDataset.from_yolo to enable loading OBB datasets from disk in YOLO format. You can read more about the YOLO OBB format here. In short, each line of the .txt file should have the following format:

class_index, x1, y1, x2, y2, x3, y3, x4, y4

The sv.OrientedBoxAnnotator expects information about oriented bounding boxes to be stored in the xyxyxyxy field of sv.Detections.data. Ensure that the information loaded from the dataset is stored there.
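A minimal sketch (not part of the original issue) of parsing one OBB label line of the format above into normalized (x, y) corner pairs; the helper name is hypothetical and not part of the supervision API.

```python
def parse_obb_line(line: str):
    # Hypothetical helper: split one YOLO OBB label line into a class index
    # and a list of four normalized (x, y) corner pairs.
    parts = line.replace(",", " ").split()
    class_index = int(parts[0])
    coords = [float(v) for v in parts[1:9]]
    # Group into (x, y) pairs: [(x1, y1), (x2, y2), (x3, y3), (x4, y4)]
    polygon = [(coords[i], coords[i + 1]) for i in range(0, 8, 2)]
    return class_index, polygon

class_index, polygon = parse_obb_line("0 0.1 0.2 0.3 0.2 0.3 0.4 0.1 0.4")
print(class_index)  # 0
print(polygon)      # [(0.1, 0.2), (0.3, 0.2), (0.3, 0.4), (0.1, 0.4)]
```

These four corner pairs are what would end up in the xyxyxyxy field described above (after scaling by the image resolution).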

API

Here's an example of how to use the new API. Roboflow allows for the export of segmentation datasets as OBB. Let's ensure that our support for OBB definitely works with datasets exported from Roboflow.

import random
import roboflow
from roboflow import Roboflow
import supervision as sv

roboflow.login()
rf = Roboflow()

project = rf.workspace("roboflow-jvuqo").project("fashion-assistant")
version = project.version(3)
dataset = version.download("yolov8-obb")

train_ds = sv.DetectionDataset.from_yolo(
    images_directory_path=f"{dataset.location}/train/images",
    annotations_directory_path=f"{dataset.location}/train/labels",
    data_yaml_path=f"{dataset.location}/data.yaml"
)

image_name = random.choice(list(train_ds.images))
image = train_ds.images[image_name]
detections = train_ds.annotations[image_name]

oriented_box_annotator = sv.OrientedBoxAnnotator()
annotated_frame = oriented_box_annotator.annotate(
    scene=image.copy(),
    detections=detections
)

Additional

  • Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will speed up the review process. The reviewer must test each change. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! 🙏🏻

pedbrgs avatar Apr 05 '24 13:04 pedbrgs

Hi, @pedbrgs 👋🏻 Thanks a lot for your interest in Supervision.

That's because, at the moment, DetectionDataset.from_yolo does not support OBB (Oriented Bounding Boxes).

It would be a good idea to convert your question into a feature request and add support for OBB. Let's do it!

SkalskiP avatar Apr 08 '24 09:04 SkalskiP

@SkalskiP Thanks for considering this! It will be great to have this feature.

pedbrgs avatar Apr 11 '24 20:04 pedbrgs

Hi, @pedbrgs 👋🏻 Fingers crossed, someone from the community will pick it up.

SkalskiP avatar Apr 11 '24 22:04 SkalskiP

Hey @SkalskiP, I wanted to drop a quick note to let you know that I'm exploring extending support for OBB datasets. While reviewing the load_annotation_yolo function, I noticed we'll need to add a boolean parameter to handle oriented bounding boxes efficiently. I'll keep you posted on my progress. Feel free to share any thoughts or suggestions you might have.

nabeelnazeer avatar Apr 13 '24 09:04 nabeelnazeer

Hi, @nabeelnazeer 👋🏻 Should I assign this ticket to you?

SkalskiP avatar Apr 13 '24 09:04 SkalskiP

Sure, go ahead @SkalskiP, I will see what I can do.

nabeelnazeer avatar Apr 13 '24 10:04 nabeelnazeer

Hi @nabeelnazeer, are you currently working on this issue? If not, can I start working?

Bhavay-2001 avatar May 10 '24 06:05 Bhavay-2001

Sure, go ahead @Bhavay-2001. Just ping me if you need any advice or have doubts about this one. I'm in the middle of a new project now and got caught up with it.

nabeelnazeer avatar May 10 '24 06:05 nabeelnazeer

@Bhavay-2001 do you want to take this task?

SkalskiP avatar May 13 '24 11:05 SkalskiP

Yes. I will start my work and tag you along

Bhavay-2001 avatar May 13 '24 13:05 Bhavay-2001

@Bhavay-2001 awesome! I'll assign this task to you ;)

SkalskiP avatar May 13 '24 13:05 SkalskiP

Hi @SkalskiP @nabeelnazeer, can you please provide sample code that I can run to check how the code works? Thanks

Bhavay-2001 avatar May 15 '24 16:05 Bhavay-2001

Hi @nabeelnazeer @SkalskiP, I checked the code, and at a glance it seems the main changes need to be made in this function. The from_yolo function just calls another function, and everything boils down to this one. A quick code sample to run might produce insights.

Would like to discuss this with you guys.

Bhavay-2001 avatar May 15 '24 17:05 Bhavay-2001

@Bhavay-2001 yup, that's the function you need to update. What code would you need?

SkalskiP avatar May 15 '24 18:05 SkalskiP

Hi @SkalskiP, a code sample to run and inspect the annotations. Basically, I want to see what the annotations look like so that I can find where I need to make changes in the code.

Or any ideas on how I can create a small sample on which I can run and test this OBB support?

Bhavay-2001 avatar May 16 '24 06:05 Bhavay-2001

Hi @Bhavay-2001 :wave:

I haven't tried it myself, but this may work.

import cv2
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8n-obb.pt")
image = cv2.imread(<SOURCE_IMAGE_PATH>)
results = model(image)[0]
detections = sv.Detections.from_ultralytics(results)

bounding_box_annotator = sv.BoundingBoxAnnotator()
label_annotator = sv.LabelAnnotator()

annotated_image = bounding_box_annotator.annotate(
    scene=image, detections=detections)
annotated_image = label_annotator.annotate(
    scene=annotated_image, detections=detections)

cv2.imshow("annotated", annotated_image)
cv2.waitKey(0)

LinasKo avatar May 17 '24 08:05 LinasKo

Hi @LinasKo, I think the feature is to add support for OBB-format annotations. So, basically, I want to check what kind of annotations from_yolo returns so that I can make changes to it.

Could you please tell me how I can prepare a small dataset to check this? Like, how can I add images, annotations, and data_yaml?

Bhavay-2001 avatar May 17 '24 13:05 Bhavay-2001

I see. I believe you can find the YOLO format here: https://docs.ultralytics.com/datasets/detect/#ultralytics-yolo-format As for the YOLO OBB annotations, they're defined here https://docs.ultralytics.com/datasets/obb/#yolo-obb-format as class_index, x1, y1, x2, y2, x3, y3, x4, y4.

Does that make more sense? You should be able to verify by running Ultralytics and doing a little bit of training.

@SkalskiP, do you know if we can use https://docs.ultralytics.com/datasets/obb/dota8/#introduction?

LinasKo avatar May 17 '24 15:05 LinasKo

Hi @LinasKo @SkalskiP, just one more thing. The from_yolo function runs load_yolo_annotations in the backend, which returns annotations. Can you please tell me what these annotations represent? Basically, I want to know the shape and values of the annotations.

Once I have that, there is a method in the ultralytics library to convert data into the YOLO OBB format, which I can discuss further. Thanks

Bhavay-2001 avatar May 19 '24 05:05 Bhavay-2001

Try running load_yolo_annotations, see what happens, and see if you can follow your intuition about what it does. I sense you're on the right track!

If you make a PR, we can adjust it later if your assumptions prove to be slightly incorrect. :slightly_smiling_face:

Tuple[List[str], Dict[str, np.ndarray], Dict[str, Detections]] is the return type, so it should return N class names, a dict with M (h x w x c) images, and a dict with M Detections.

LinasKo avatar May 20 '24 07:05 LinasKo

Hi @LinasKo, yes, I think I'll get an idea by running the load_yolo_annotations function. But for that, I need a dataset in a format compatible with the function.

Like in this example

train_ds = sv.DetectionDataset.from_yolo(
    images_directory_path=f"{dataset.location}/train/images",
    annotations_directory_path=f"{dataset.location}/train/labels",
    data_yaml_path=f"{dataset.location}/data.yaml"
)

Can you suggest any dataset which I can load using this function? Thanks

Bhavay-2001 avatar May 20 '24 07:05 Bhavay-2001

For that one, you can make a small example yourself with the format description provided.

@SkalskiP, do you know if we can use https://docs.ultralytics.com/datasets/obb/dota8/#introduction?

LinasKo avatar May 20 '24 07:05 LinasKo
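Not part of the thread: one way to hand-roll a tiny dataset on disk in the layout from_yolo expects, following LinasKo's suggestion above. The directory name, class name, and label values are made up for illustration, and a matching image file (e.g. train/images/sample.jpg) would still need to be added alongside the label.

```python
import os

# Hypothetical minimal OBB dataset: one label file plus a data.yaml.
root = "tiny-obb-dataset"
os.makedirs(f"{root}/train/images", exist_ok=True)
os.makedirs(f"{root}/train/labels", exist_ok=True)

# One label line: class_index followed by four normalized (x, y) corners.
with open(f"{root}/train/labels/sample.txt", "w") as f:
    f.write("0 0.1 0.2 0.3 0.2 0.3 0.4 0.1 0.4\n")

# Minimal data.yaml with a single made-up class.
with open(f"{root}/data.yaml", "w") as f:
    f.write("train: train/images\nnc: 1\nnames: ['example-class']\n")
```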

Alright. I will check and create one and do some research and get back to you.

Bhavay-2001 avatar May 20 '24 07:05 Bhavay-2001

I took a look at Roboflow Universe, but it looks like there is no easy way to search it for OBB datasets.

SkalskiP avatar May 20 '24 11:05 SkalskiP

Hi @SkalskiP, will I be able to make a small dataset in the below format using the Roboflow website?

train_ds = sv.DetectionDataset.from_yolo(
    images_directory_path=f"{dataset.location}/train/images",
    annotations_directory_path=f"{dataset.location}/train/labels",
    data_yaml_path=f"{dataset.location}/data.yaml"
)

Like, can I have images, labels, and a data.yaml file? I have never prepared one.

Bhavay-2001 avatar May 20 '24 11:05 Bhavay-2001

Hi @LinasKo @SkalskiP, I have tried running sample code on this fashion assistant dataset, and I have a few things I want to ask:

  1. When I download the dataset to my local machine, I get a train folder that has both images and labels. The labels are in the format class_id, v1, v2, ..... v8. What do these labels represent? What are the 8 values?

Because out of these 8 values, we take 2 min and 2 max values, and those are further multiplied by resolution_wh to calculate xyxy.

Bhavay-2001 avatar May 23 '24 18:05 Bhavay-2001

If these 8 values are the x and y coordinates we want for the YOLO OBB format, then we can skip the code that calculates the min and max values. Thanks

Bhavay-2001 avatar May 23 '24 18:05 Bhavay-2001

Here is the documentation for the YOLO-OBB format: https://docs.ultralytics.com/datasets/obb/. It looks like each line of the text file is organized this way: class_index, x1, y1, x2, y2, x3, y3, x4, y4. Each x and y value is normalized, so a line looks, for example, like this: 0 0.780811 0.743961 0.782371 0.74686 0.777691 0.752174 0.776131 0.749758. To load it, you'd need to multiply each x value by the image width and each y value by the image height.

SkalskiP avatar May 24 '24 11:05 SkalskiP
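Not part of the thread: a quick sketch of the scaling SkalskiP describes, reusing his example label line. The image resolution here is an arbitrary assumption, and the min/max step mirrors how the current loader derives an axis-aligned xyxy box from the corner values.

```python
line = "0 0.780811 0.743961 0.782371 0.74686 0.777691 0.752174 0.776131 0.749758"
w, h = 1280, 720  # assumed image resolution, not from the dataset

values = line.split()
class_index = int(values[0])
coords = [float(v) for v in values[1:]]

# Multiply every x (even index) by the image width
# and every y (odd index) by the image height.
pixels = [coords[i] * (w if i % 2 == 0 else h) for i in range(8)]

# Axis-aligned xyxy box, as the current loader derives via min/max.
xs, ys = pixels[0::2], pixels[1::2]
xyxy = (min(xs), min(ys), max(xs), max(ys))
```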

Yes, because the current code calculates min and max values from all the label values. So I think that to load the OBB format, we don't want to calculate those; instead, we can just scale the normalized values.

I will open a PR and tag you to check if I did it right. Thanks

Bhavay-2001 avatar May 24 '24 11:05 Bhavay-2001

As far as I know, the current version of the code does not support OBB. When you say "current code calculates min and max values from all the labels," could you specify which line of code you are referring to?

SkalskiP avatar May 24 '24 13:05 SkalskiP