Segment-Everything-Everywhere-All-At-Once
How to train your own dataset
Hello, I currently have my own dataset labeled with labelme and converted to COCO format. The directory layout is as follows:
${DATASET_ROOT}   # dataset root directory, e.g. /home/username/data/NWPU
├── annotations
│   ├── train.json
│   ├── val.json
│   └── test.json
└── images
    ├── train
    ├── val
    └── test
Excuse me, can this be trained directly?
I also want to train my own dataset, but the situation does not look very promising. If you find a solution, please share it. Thank you very much.
Hi, there are roughly two steps:
- register your dataset following the sample code in: https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once/tree/v1.0/datasets/registration
- create a dataset mapper following the sample code in: https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once/tree/v1.0/datasets/dataset_mappers
You can refer to the code for COCO or the other datasets used in our training.
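The registration step above can be sketched as a loader function that turns a COCO-format annotation file into the list-of-dicts records that a detectron2-style dataset catalog expects. This is a minimal sketch, not code from the SEEM repo; `load_my_coco_dataset` and its paths are hypothetical, and you should compare the emitted fields against the registration files linked above.

```python
import json
import os

def load_my_coco_dataset(json_path, image_root):
    """Load a COCO-format annotation file into list-of-dicts records.

    Hypothetical helper illustrating the registration step; adapt the
    field names to match the dataset mappers used in your training.
    """
    with open(json_path) as f:
        coco = json.load(f)

    # Group annotations by the image they belong to.
    anns_by_image = {}
    for ann in coco["annotations"]:
        anns_by_image.setdefault(ann["image_id"], []).append(ann)

    records = []
    for img in coco["images"]:
        records.append({
            "file_name": os.path.join(image_root, img["file_name"]),
            "image_id": img["id"],
            "height": img["height"],
            "width": img["width"],
            "annotations": [
                {
                    "bbox": a["bbox"],
                    "category_id": a["category_id"],
                    "segmentation": a.get("segmentation", []),
                }
                for a in anns_by_image.get(img["id"], [])
            ],
        })
    return records
```

A catalog registration would then pass a closure over this function (e.g. `lambda: load_my_coco_dataset("annotations/train.json", "images/train")`) so the dataset is loaded lazily at training time.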
Okay, thank you very much for the patient explanation. I'll give it a try now.
I have another question: how can I construct JSON files like those in the image from my annotated dataset?
To construct the JSON file, please ask GPT-4. Hhh