UPSNet
Training on own dataset
I am trying to train the network on a custom dataset where I have the RGB images and the panoptic labels in PNG format. Following your tip, I am trying to get the labels into COCO format, and I have the following questions. It would be a great help if you could answer them.
- I created COCO-format instances_train/val2017.json files following
http://www.immersivelimit.com/tutorials/create-coco-annotations-from-scratch
(but since I have panoptic labels, I included both the semantic and the instance annotations in the JSON, would that be a problem?), and I have created the panoptic_coco_categories.json file as well.
So what I have is: the original RGB images, the panoptic label images in PNG format, the categories JSON file, and the instances JSON file. Would that be sufficient for training?
- How do I run inference on my own images with the Cityscapes or COCO weights, without providing ground truth? That is, panoptic inference based only on an RGB image as input.
- I was wondering whether you used some COCO-annotator tool to convert the annotations for your dataset to COCO format.
- How can I train on images with more or fewer than 3 channels?
- Is it possible to train with only the RGB images and the labels in PNG format? How would that work? Thanks :)
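On the channel question: one common workaround (not UPSNet-specific, just the usual trick for pretrained backbones) is to rebuild the first conv layer's weight tensor for the new channel count, initializing any extra channels with the mean of the pretrained RGB filters. A minimal numpy sketch of that weight surgery, with illustrative names only:

```python
import numpy as np

# Hypothetical sketch: adapt pretrained first-conv weights of shape
# (out_channels, 3, kH, kW) to a different number of input channels.
# Fewer channels: keep a slice. More channels: fill the new input
# channels with the mean of the RGB filters (a common heuristic).
# This is not UPSNet API, just the general technique.
def adapt_first_conv_weights(w, in_channels):
    out_c, rgb_c, kh, kw = w.shape
    if in_channels <= rgb_c:
        return w[:, :in_channels].copy()
    mean_w = w.mean(axis=1, keepdims=True)              # (out, 1, kH, kW)
    extra = np.repeat(mean_w, in_channels - rgb_c, axis=1)
    return np.concatenate([w, extra], axis=1)

# e.g. a ResNet-style 7x7 stem, extended to RGB + depth
w = np.random.randn(64, 3, 7, 7)
w4 = adapt_first_conv_weights(w, 4)
print(w4.shape)  # (64, 4, 7, 7)
```

The same idea applies in PyTorch by copying the adapted tensor into a freshly constructed first conv layer before loading the rest of the pretrained state dict.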
Have you solved your problem? I am trying to train the network on a custom dataset too. How do you create the panoptic_{}2017.json file?
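In case it helps: the COCO panoptic format stores, per image, a PNG whose pixels encode a segment id as id = R + 256·G + 256²·B (the encoding used by the official panopticapi), plus a JSON entry listing each segment's id, category, area, and bbox. A hedged sketch of building one such annotation entry from a panoptic label PNG; the `id_to_category` mapping is dataset-specific and assumed here:

```python
import json
import numpy as np

# Decode panopticapi-style RGB encoding into integer segment ids.
def rgb2id(color):
    color = color.astype(np.uint32)
    return color[..., 0] + 256 * color[..., 1] + 256 ** 2 * color[..., 2]

# Hypothetical helper (not part of UPSNet): build one annotation entry
# of a panoptic_{train,val}2017.json "annotations" list from a label
# image. `pan_png` is an HxWx3 uint8 array (e.g. loaded with PIL),
# `id_to_category` maps segment id -> category id for your dataset.
def annotation_from_png(pan_png, image_id, file_name, id_to_category):
    ids = rgb2id(pan_png)
    segments_info = []
    for seg_id in np.unique(ids):
        if seg_id == 0:                      # 0 conventionally = unlabeled
            continue
        mask = ids == seg_id
        ys, xs = np.where(mask)
        bbox = [int(xs.min()), int(ys.min()),
                int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1)]
        segments_info.append({
            "id": int(seg_id),
            "category_id": int(id_to_category[int(seg_id)]),
            "area": int(mask.sum()),
            "bbox": bbox,
            "iscrowd": 0,
        })
    return {"image_id": image_id, "file_name": file_name,
            "segments_info": segments_info}

# Tiny synthetic example: a 4x4 label image with one segment (id 5).
png = np.zeros((4, 4, 3), dtype=np.uint8)
png[1:3, 1:3, 0] = 5                         # id 5 encoded in the R channel
ann = annotation_from_png(png, 1, "000001.png", {5: 7})
print(json.dumps(ann))
```

The full JSON also needs top-level "images", "categories", and "annotations" lists; the panopticapi repository's converter scripts are the reference for those details.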