
images without any annotation

Open xjsxujingsong opened this issue 2 years ago • 4 comments

Hi, thanks for your excellent work. I am trying to use SparseInst to train on my own dataset, which contains a lot of images without any annotations. I have enabled detectron2 to read such images by setting `FILTER_EMPTY_ANNOTATIONS: False`. But when it comes to computing the loss, there is an error in loss.py at this line: `target_classes_o = torch.cat([t["labels"][J] for t, (_, J) in zip(targets, indices)])`

I guess the reason is that there are no actual labels in the sample. Is there any way to update the code to cover such cases, or maybe we can skip the training iteration if there is no annotation in the current batch?
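For example, something like the sketch below is what I have in mind (untested on my side; I am assuming `src_logits` from the same loss function is in scope for the device of the fallback tensor):

```python
# Untested sketch: skip images with no labels, and fall back to an empty
# tensor when no image in the batch has annotations.
matched_labels = [
    t["labels"][J]
    for t, (_, J) in zip(targets, indices)
    if t["labels"].numel() > 0
]
if matched_labels:
    target_classes_o = torch.cat(matched_labels)
else:
    target_classes_o = torch.zeros(0, dtype=torch.int64, device=src_logits.device)
```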

Thanks.

xjsxujingsong · Nov 12 '22

Hi @xjsxujingsong, thanks for your interest in SparseInst! This is a meaningful problem; we intend to support it but have not been able to fix the code in time. I have some suggestions:

  1. use placeholders to fill in the targets of images without annotations (a rough sketch follows this list),
  2. support empty tensors. We'll fix it in the near future, but I can't guarantee when.
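For suggestion 1, a rough, untested sketch of such a placeholder (the `labels` key follows the loss.py snippet above; the `masks` key, shapes, and dtypes are assumptions and may need to match your dataset mapper):

```python
import torch

def make_empty_target(img_h, img_w, device="cpu"):
    # Placeholder target for an image without annotations: zero instances,
    # but with the keys the loss code expects, so every image in the batch
    # has a consistent target structure.
    return {
        "labels": torch.zeros(0, dtype=torch.int64, device=device),
        "masks": torch.zeros(0, img_h, img_w, dtype=torch.bool, device=device),
    }
```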

wondervictor · Nov 14 '22

Hi, thanks for your quick reply. I currently add an `if` to check whether there are any annotations, but I am not sure if it is the correct way:

```python
if num_instances == 0:
    src_logits = outputs['pred_logits']
    # No ground-truth instances: treat every prediction as background
    # (all-zero targets) and compute only the focal classification loss.
    labels = torch.zeros_like(src_logits)
    class_loss = sigmoid_focal_loss_jit(
        src_logits,
        labels,
        alpha=0.25,
        gamma=2.0,
        reduction="sum",
    )
    losses = {'loss_ce': class_loss}
    return losses
```

xjsxujingsong · Nov 14 '22

@xjsxujingsong Is it working as expected without annotations?

debasmitdas · Mar 07 '23

> @xjsxujingsong Is it working as expected without annotations?

Yes, it worked.

xjsxujingsong · Mar 08 '23