SparseInst
images without any annotation
Hi, thanks for your excellent work. I am trying to use SparseInst to train my own dataset, which contains a lot of images without any annotations. I have enabled detectron2 to read such images by setting `FILTER_EMPTY_ANNOTATIONS: False`. But when it comes to computing the loss, there is an error in loss.py at this line:

```python
target_classes_o = torch.cat([t["labels"][J] for t, (_, J) in zip(targets, indices)])
```
I guess the reason is that there are no actual labels in the sample. Is there any way to update the code to cover such cases, or maybe we can skip the training iteration if there is no annotation in the current batch?
Thanks.
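A minimal sketch of the "skip the iteration" idea mentioned above, assuming DETR-style per-image target dicts with a `labels` tensor; the helper name `batch_has_annotations` and the integration point are my assumptions, not part of SparseInst:

```python
def batch_has_annotations(targets):
    """Return True if at least one image in the batch carries instances."""
    return any(len(t["labels"]) > 0 for t in targets)

# Inside the training loop (integration point is an assumption):
# if not batch_has_annotations(targets):
#     continue  # skip the iteration entirely when the whole batch is empty
```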
Hi @xjsxujingsong, thanks for your interest in SparseInst! It is a meaningful problem, and we intend to support it, but we could not fix the code in time. I have some suggestions:
- using placeholders to fill the targets of the images without annotations (a rough sketch follows below),
- supporting empty tensors.

We'll fix it in the near future, but we cannot guarantee a time.
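A minimal sketch of the placeholder idea, assuming DETR-style target dicts with `labels` and `masks` fields; the field names and the helper `make_empty_target` are assumptions, not the repository's actual API:

```python
import torch

def make_empty_target(mask_h, mask_w, device):
    """Placeholder target for an image without annotations."""
    return {
        "labels": torch.zeros((0,), dtype=torch.int64, device=device),
        "masks": torch.zeros((0, mask_h, mask_w), dtype=torch.bool, device=device),
    }
```

With such placeholders, a concatenation like the failing `torch.cat([t["labels"][J] ...])` yields an empty tensor instead of raising, as long as the list passed to `torch.cat` is non-empty.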
Hi, thanks for your quick reply. I am currently adding an `if` to check whether there are any annotations, but I am not sure if it is the correct way:

```python
if num_instances == 0:
    src_logits = outputs['pred_logits']
    labels = torch.zeros_like(src_logits)
    # compute focal loss against all-zero (background) labels.
    class_loss = sigmoid_focal_loss_jit(
        src_logits,
        labels,
        alpha=0.25,
        gamma=2.0,
        reduction="sum",
    )
    losses = {'loss_ce': class_loss}
    return losses
```
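One possible refinement, purely an assumption on my side rather than a confirmed fix: with `reduction="sum"`, the loss magnitude for empty images depends on the total number of logits, so normalizing by the number of predictions may keep it on a scale comparable to annotated batches:

```python
# Hypothetical normalization; the divisor is an assumption, not SparseInst's
# official scheme. src_logits is assumed to be (batch, num_queries, num_classes).
num_preds = max(src_logits.shape[0] * src_logits.shape[1], 1)
losses = {'loss_ce': class_loss / num_preds}
```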
@xjsxujingsong Is it working as expected without annotations ?
Yes, it worked.