OBBDetection

About dataset

ZacharySha opened this issue 3 years ago • 10 comments

Hi, thanks for your work!!! I didn't find detailed instructions for dataset preparation in the README. If I want to train on my own dataset (PNG images and JSON annotations), is the data preparation the same as in MMDetection?

ZacharySha · Sep 17 '21 06:09

The data preparation is almost the same as in MMDetection. You can refer to custom.py for the data structure.

However, you need to pay attention to some details:

  • data['ann']['bboxes'] and data['ann']['bboxes_ignore'] should use one of the bbox types defined in BboxToolkit; you can find the bbox definitions in Usage.md (note: the angle of an OBB is counterclockwise).
  • The pipelines of oriented detectors are different from the original MMDetection ones; you can refer to datasets for details. RandomRotate needs a cls key in results, so you may need to add your classes to results, like this (see the sketch below).

In future updates, I will write a new obb_custom.py for personal datasets.
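To make the two points above concrete, here is a minimal sketch, not code from the repository: the class name MyOBBDataset, the class names, the file name, and the assumed OBB layout of (cx, cy, w, h, theta) with a counterclockwise angle are all illustrative; the authoritative definitions are custom.py and BboxToolkit's Usage.md.

```python
# Minimal sketch of a custom oriented dataset, assuming an MMDetection-style
# CustomDataset subclass and an OBB layout of (cx, cy, w, h, theta).
import numpy as np
from mmdet.datasets import DATASETS, CustomDataset


@DATASETS.register_module()
class MyOBBDataset(CustomDataset):           # hypothetical name
    CLASSES = ('car', 'plane')               # hypothetical classes

    def load_annotations(self, ann_file):
        # Parse your own PNG/JSON (or JPG/XML) annotations here; each image
        # becomes one dict shaped like the example below.
        return [dict(
            filename='demo_0001.png',        # hypothetical file name
            width=1024,
            height=1024,
            ann=dict(
                # N x 5 float32 array, one oriented box per row:
                # (cx, cy, w, h, theta), theta counterclockwise
                bboxes=np.array([[512.0, 300.0, 120.0, 40.0, 0.35]],
                                dtype=np.float32),
                labels=np.array([0], dtype=np.int64),
                # same layout for ignored regions; empty if there are none
                bboxes_ignore=np.zeros((0, 5), dtype=np.float32),
                labels_ignore=np.zeros((0,), dtype=np.int64),
            ))]

    def pre_pipeline(self, results):
        # RandomRotate in the oriented pipelines expects a 'cls' key, so expose
        # the class names here (an assumption mirroring the built-in datasets).
        super().pre_pipeline(results)
        results['cls'] = self.CLASSES
```

With something like this in place, the dataset can be registered and used in a config the same way the built-in datasets are.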

jbwang1997 · Sep 17 '21 09:09

I am eager to study the new obb_custom.py; thanks for publishing it as soon as possible. I want to train my own dataset (JPG and XML formats). Can you give more specific guidance? Thanks!

hust-lidelong · Sep 18 '21 04:09

Do your images need to be split like the DOTA dataset? Could you provide the structure of your XML files?

jbwang1997 · Sep 18 '21 08:09

Could you provide the structure of your JSON annotations and tell me whether your images need to be split like the DOTA dataset?

jbwang1997 · Sep 18 '21 08:09

My XML annotation looks like this: xml.txt, and my images do not need to be split.

hust-lidelong · Sep 18 '21 08:09

Could you give some advice? Thanks!

hust-lidelong · Sep 20 '21 06:09

Your annotations are quite similar to the VOC dataset's. I recommend you refer to xml_style.py and load the rotated box data into data_info.
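For reference, here is a minimal sketch of the loading step this advice points at, not the repository's xml_style.py. Since the attached xml.txt is not reproduced in this thread, the tag names (a robndbox element with cx, cy, w, h, angle children, in the style of roLabelImg), the class list, and the function name parse_rotated_xml are all assumptions; adapt them to your actual files.

```python
# Sketch of turning one VOC-like XML file with rotated boxes into the
# data_info dict discussed earlier. Tag names are assumptions.
import xml.etree.ElementTree as ET

import numpy as np

CLASSES = ('car', 'plane')  # hypothetical class list


def parse_rotated_xml(xml_path):
    tree = ET.parse(xml_path)
    root = tree.getroot()

    size = root.find('size')                 # VOC-style <size> element
    width = int(size.find('width').text)
    height = int(size.find('height').text)

    bboxes, labels = [], []
    for obj in root.findall('object'):
        name = obj.find('name').text
        if name not in CLASSES:
            continue
        rbox = obj.find('robndbox')          # hypothetical rotated-box tag
        # Note: BboxToolkit defines the OBB angle as counterclockwise, so you
        # may need to flip the sign depending on your annotation tool.
        bboxes.append([float(rbox.find(k).text)
                       for k in ('cx', 'cy', 'w', 'h', 'angle')])
        labels.append(CLASSES.index(name))

    return dict(
        filename=root.find('filename').text,
        width=width,
        height=height,
        ann=dict(
            bboxes=np.array(bboxes, dtype=np.float32).reshape(-1, 5),
            labels=np.array(labels, dtype=np.int64),
            bboxes_ignore=np.zeros((0, 5), dtype=np.float32),
            labels_ignore=np.zeros((0,), dtype=np.int64),
        ))
```

A dataset class modeled on xml_style.py would call something like this for every file listed in the image set and return the collected dicts from load_annotations.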

jbwang1997 · Sep 20 '21 15:09

Thanks very much!

hust-lidelong · Sep 20 '21 15:09

Have you solved this problem? I am also planning to train on my own XML dataset now. Could you explain in detail how to do it?

ccccwb · Oct 25 '21 03:10

How should the xml_style.py file be used?

EagleHong · Oct 25 '21 08:10