LSNet
errors on training custom instance segmentation dataset
I have prepared a custom instance segmentation dataset containing 5 classes (not counting background). It trains fine with the original MMDetection framework (e.g. DetectoRS, Mask R-CNN, HTC), but when I modified lsnet_segm_r50_fpn_1x_coco.py to train on this dataset, the system reports the following error:
File "/data2/lixuan/workspace/LSNet/code/mmdet/models/dense_heads/lsnet_head.py", line 1299, in loss
    gt_polygons, gt_bboxes = self.process_polygons(gt_masks, cls_scores)
File "/data2/lixuan/workspace/LSNet/code/mmdet/models/dense_heads/lsnet_head.py", line 1742, in process_polygons
    gt_polygons_stack = torch.stack(gt_polygons)
RuntimeError: stack expects a non-empty TensorList
I checked lsnet_head.py and found that gt_masks is empty:
def forward_train(self,
                  x,
                  img_metas,
                  gt_bboxes,
                  gt_extremes=None,
                  gt_keypoints=None,
                  gt_masks=None,
                  gt_labels=None,
                  gt_bboxes_ignore=None,
                  proposal_cfg=None,
                  **kwargs):
    outs = self(x)
    print(gt_masks)  # added for debugging
    input()
Output: [PolygonMasks(num_masks=0, height=800, width=1088)]
What causes this error, and how can I solve it?
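For what it's worth, a common cause of empty PolygonMasks is that the class names in the dataset definition don't match the category names in the annotation file, so every annotation gets filtered out during loading. A small sanity check (a hypothetical helper, not part of LSNet or MMDetection) can compare the two:

```python
# Hypothetical sanity check: compare the "categories" list from a
# COCO-style annotation json against the CLASSES tuple used by the
# dataset class. Any name in CLASSES that is missing from the
# annotations will cause those annotations to be dropped, which can
# leave gt_masks empty and trigger the torch.stack error above.
def check_classes(categories, classes):
    """Return the class names in `classes` that do not appear in the
    annotation file's category list."""
    cat_names = {c['name'] for c in categories}
    return [name for name in classes if name not in cat_names]

# Example with placeholder names:
categories = [{'id': 1, 'name': 'class1'}, {'id': 2, 'name': 'class2'}]
CLASSES = ('class1', 'class2', 'class3')
missing = check_classes(categories, CLASSES)  # ['class3'] -> mismatch
```

An empty result means the names line up; otherwise the listed classes will never receive ground-truth masks.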
@kklots Add coco_lsvr.py to code/configs/base/datasets, and modify coco.py in code/mmdet/datasets as well as loading.py and transforms.py in code/mmdet/datasets/pipelines.
Maybe you forgot to modify CLASSES in coco.py at code/mmdet/datasets. You need to comment out the 80 COCO classes and add your own.
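The suggested edit looks roughly like this (a sketch with placeholder class names; in the real file CocoDataset subclasses CustomDataset and the CLASSES tuple lists all 80 COCO names):

```python
# Sketch of the change in code/mmdet/datasets/coco.py: comment out the
# original 80 COCO class names and list your own. The names must match
# the "name" fields under "categories" in your annotation json, and the
# model's num_classes in the config should be set to 5 to match.
class CocoDataset:  # subclasses CustomDataset in the real file
    # CLASSES = ('person', 'bicycle', 'car', ...)  # original 80 classes
    CLASSES = ('class1', 'class2', 'class3', 'class4', 'class5')  # yours
```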