label-studio-converter
feat: Add YOLO_OBB
Could not test this commit. Any help is appreciated
ref: https://docs.ultralytics.com/datasets/obb/#supported-obb-dataset-formats ref: #280
Codecov Report
Attention: Patch coverage is 90.69767% with 4 lines in your changes missing coverage. Please review.
:exclamation: No coverage uploaded for pull request base (master@5171df4). Click here to learn what that means.
| Files | Patch % | Lines |
|---|---|---|
| label_studio_converter/converter.py | 89.47% | 2 Missing :warning: |
| label_studio_converter/main.py | 0.00% | 2 Missing :warning: |
Additional details and impacted files
@@ Coverage Diff @@
## master #281 +/- ##
=========================================
Coverage ? 49.11%
=========================================
Files ? 22
Lines ? 1859
Branches ? 0
=========================================
Hits ? 913
Misses ? 946
Partials ? 0
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
Could you please add tests?
Now it looks almost awesome! :-) The last thing I would add: a few lines in the test where we check the actually converted coordinates in the YOLO output. Is that possible?
@makseq I can do it, but I need some help generating a practical example of annotated data in YOLO_OBB format, so that I can be sure it is working. The documentation at https://docs.ultralytics.com/datasets/obb/#supported-obb-dataset-formats is pretty vague. I tried executing
python label_studio_converter/main.py export -i <exported_task>.json -o output_dir --image-dir <image_dir> -f YOLO
but I get the error:
File "label_studio_converter/converter.py", line 779, in convert_to_yolo
    data_key = self._data_keys[0]
Since not even YOLO works in the master branch, I assume this is not a problem with my changes. Also, when I pass a -c file, I don't get this error, but I'm not sure what the .xml file should be or how I could export it from Label Studio.
@ftapajos You need to add labeling config:
python label_studio_converter/main.py export -c path/to/label_config.xml -i <exported_task>.json -o output_dir --image-dir <image_dir> -f YOLO
I do need some help in order to generate a practical example of annotated data in YOLO_OBB format
Let's try to cover it at least with a simple approach like this: just manually calculate one bbox.
Label studio bbox:
{
"x": 7.430746538901997,
"y": 80.15868310961756,
"width": 39.142525358447735,
"height": 44.06258937962363,
"rotation": 294.75527908528767
}
YOLO OBB:
1 0.07430746538901997 0.8015868310961756 0.23821421072904478 1.1570419027658772 0.6383486098885134 0.9725327135827279 0.4744418645484886 0.6170776419130263
=> top left = (0.07430746538901997, 0.8015868310961756), top right = (0.23821421072904478, 1.1570419027658772), bottom left = (0.6383486098885134, 0.9725327135827279), bottom right = (0.4744418645484886, 0.6170776419130263)
and it seems these coordinates are wrong.
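The likely issue is that applying the rotation directly to the percent values ignores the image's aspect ratio. Here is a sketch of how the conversion could work instead: compute the rotated corners in pixel space and only then normalize to 0..1 for the YOLO OBB line. The function name `ls_bbox_to_yolo_obb` is hypothetical, not part of the converter, and it assumes the task result carries `original_width`/`original_height`:

```python
import math


def ls_bbox_to_yolo_obb(box, class_id):
    """Sketch: convert a Label Studio percent bbox (+rotation in degrees)
    to a YOLO OBB line "class x1 y1 x2 y2 x3 y3 x4 y4" with corners
    normalized to 0..1. Rotation is applied in pixel space so the image
    aspect ratio is respected, then corners are divided by the image size."""
    iw, ih = box["original_width"], box["original_height"]
    x1, y1 = box["x"] * iw / 100, box["y"] * ih / 100        # top left, px
    w, h = box["width"] * iw / 100, box["height"] * ih / 100  # size, px
    a = math.radians(box.get("rotation", 0.0))
    cos_a, sin_a = math.cos(a), math.sin(a)
    corners = [
        (x1, y1),                                                   # top left
        (x1 + w * cos_a, y1 + w * sin_a),                           # top right
        (x1 + w * cos_a - h * sin_a, y1 + w * sin_a + h * cos_a),   # bottom right
        (x1 - h * sin_a, y1 + h * cos_a),                           # bottom left
    ]
    coords = " ".join(f"{px / iw:.6f} {py / ih:.6f}" for px, py in corners)
    return f"{class_id} {coords}"
```

With the bbox from the task example below (768x578 image), this produces normalized corners that match the pixel-space corners divided by the image dimensions.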
Try using this Label config =
<View>
<Image name="image" value="$image"/>
<RectangleLabels name="label" toName="image">
<Label value="Airplane" background="green"/>
<Label value="Car" background="blue"/>
</RectangleLabels>
</View>
Task =
{
"id": 1,
"data": {
"image": "https://data.heartex.net/open-images/train_0/mini/0096ba8fb44c1e2a.jpg"
},
"annotations": [
{
"result": [
{
"original_width": 768,
"original_height": 578,
"image_rotation": 0,
"value": {
"x": 1.2027596671261573,
"y": 79.84292731596452,
"width": 31.60572787517726,
"height": 22.9604606991428,
"rotation": 319.9085956625488,
"rectanglelabels": [
"Car"
]
},
"id": "oid3",
"from_name": "label",
"to_name": "image",
"type": "rectanglelabels",
}
]
}
]
}
and check this piece of code; it works correctly and visualizes the rectangle from the task above:
# pip install opencv-python numpy matplotlib
import math

import cv2
import matplotlib.pyplot as plt
import numpy as np


def get_rotated_corner_points(box):
    """Get the corner points of a rotated bounding box.

    Note: it's important to take the original width and height of the image
    into account to get the correct coordinates.

    Args:
        box (dict): The bounding box with the following keys:
            - x: x-coordinate of the top left corner (in percent, 0-100)
            - y: y-coordinate of the top left corner (in percent, 0-100)
            - width: width of the box (in percent, 0-100)
            - height: height of the box (in percent, 0-100)
            - rotation: rotation angle in degrees (optional)
            - original_width: original width of the image (in pixels)
            - original_height: original height of the image (in pixels)
    """
    image_width, image_height = box["original_width"], box["original_height"]
    w, h = box["width"] * image_width / 100, box["height"] * image_height / 100
    a = math.pi * (box["rotation"] / 180.0) if "rotation" in box else 0.0
    cos_a, sin_a = math.cos(a), math.sin(a)
    x1, y1 = box["x"] * image_width / 100, box["y"] * image_height / 100  # top left
    x2, y2 = x1 + w * cos_a, y1 + w * sin_a  # top right
    x3, y3 = x2 - h * sin_a, y2 + h * cos_a  # bottom right
    x4, y4 = x1 - h * sin_a, y1 + h * cos_a  # bottom left
    return [(x1, y1), (x2, y2), (x3, y3), (x4, y4)]


# Load an example image for the background (just for visualization).
# Download it here: https://data.heartex.net/open-images/train_0/mini/0096ba8fb44c1e2a.jpg
image = cv2.imread("0096ba8fb44c1e2a.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

label = {
    "x": 1.2027596671261573,
    "y": 79.84292731596452,
    "width": 31.60572787517726,
    "height": 22.9604606991428,
    "rotation": 319.9085956625488,
    "original_width": 768,
    "original_height": 578,
}
points = get_rotated_corner_points(label)

# Test the output points
assert points == [
    (9.237194243528888, 461.4921198862749),
    (194.9315420018524, 305.17056596785676),
    (280.3989008248278, 406.69722722112033),
    (94.70455306650432, 563.0187811395385),
]

# Draw the box edges (np.int0 was removed in NumPy 2.0, so cast with int())
for start, end in zip(points, points[1:] + points[:1]):
    cv2.line(image, tuple(map(int, start)), tuple(map(int, end)), (255, 0, 0), 2)

# Display the image
plt.figure(figsize=(10, 10))
plt.imshow(image)
plt.axis("off")
plt.show()
It would be great if you could reuse get_rotated_corner_points(). Also, you can reuse the assert for the test.
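The requested unit test could look roughly like this (a sketch: the test name is made up, and get_rotated_corner_points is copied inline from the snippet above so the example is self-contained; in the PR it would be imported from the converter module instead):

```python
import math


def get_rotated_corner_points(box):
    # Copied from the snippet above for self-containment.
    iw, ih = box["original_width"], box["original_height"]
    w, h = box["width"] * iw / 100, box["height"] * ih / 100
    a = math.pi * (box["rotation"] / 180.0) if "rotation" in box else 0.0
    cos_a, sin_a = math.cos(a), math.sin(a)
    x1, y1 = box["x"] * iw / 100, box["y"] * ih / 100  # top left
    x2, y2 = x1 + w * cos_a, y1 + w * sin_a            # top right
    x3, y3 = x2 - h * sin_a, y2 + h * cos_a            # bottom right
    x4, y4 = x1 - h * sin_a, y1 + h * cos_a            # bottom left
    return [(x1, y1), (x2, y2), (x3, y3), (x4, y4)]


def test_rotated_bbox_corners():
    # Bbox and expected corners taken from the worked example above.
    box = {
        "x": 1.2027596671261573,
        "y": 79.84292731596452,
        "width": 31.60572787517726,
        "height": 22.9604606991428,
        "rotation": 319.9085956625488,
        "original_width": 768,
        "original_height": 578,
    }
    expected = [
        (9.237194243528888, 461.4921198862749),
        (194.9315420018524, 305.17056596785676),
        (280.3989008248278, 406.69722722112033),
        (94.70455306650432, 563.0187811395385),
    ]
    for (gx, gy), (ex, ey) in zip(get_rotated_corner_points(box), expected):
        assert math.isclose(gx, ex, abs_tol=1e-6)
        assert math.isclose(gy, ey, abs_tol=1e-6)
```

Comparing with a small tolerance (rather than exact `==`) keeps the test stable across platforms with slightly different floating-point rounding.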
Thanks @makseq, with the hints you gave me I managed to correct the commit, changing almost everything to get it working with my dataset. Should I assert against the same results you provided?
@ftapajos yes, use it in the assert. Also, I've left some comments in your code; it's not very well formatted.
@ftapajos please let me know, would you like to finish your contribution?
@makseq I would, but to be fair I'm a little short on time until the end of next week. I really don't mind if anyone wants to take over this PR until then.
Closed because of this similar PR: https://github.com/HumanSignal/label-studio-converter/pull/287