
[FR] Case for evaluating initial correctness of labelling

senovr opened this issue on Sep 30, 2022 · 0 comments

Proposal Summary

Ability to estimate the correctness of ground_truth labels, with or without invoking a pre-trained model.

Motivation

I have a question about the following use case: imagine we have a dataset that has already been labeled by crowd workers (e.g., COCO). There may well be some mistakes (wrong or missing labels) for various objects. Is there a way to evaluate the initial correctness of the labelling with FiftyOne? I was not able to locate such a case in the examples; the closest one is Digging into COCO.

Example of the workflow (a rough sketch in code follows the list):

  • Extract patches from dataset
  • Compute embeddings
  • Compute "similarity" or uniqueness for each class of objects
  • Return the similarity scores
  • The most dissimilar labels can be filtered in the App and evaluated visually
  • Images with incorrect labelling are sent back to the crowd for re-labelling
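To make the idea concrete, here is a rough sketch of how I imagine this could work with the existing FiftyOne Brain methods (the quickstart zoo dataset and the brain_key / tag names are just placeholders, and I am not sure this is the intended way to combine them):

```python
import fiftyone as fo
import fiftyone.zoo as foz
import fiftyone.brain as fob

# Stand-in dataset with existing "ground_truth" detections; swap in your own
dataset = foz.load_zoo_dataset("quickstart")

# Steps 1-2: embed every ground_truth object patch and reduce to 2D points
# (the default dimensionality reduction requires the umap-learn package)
fob.compute_visualization(
    dataset,
    patches_field="ground_truth",
    brain_key="gt_viz",
)

# Step 3: index the same patches so per-object similarity queries are possible
fob.compute_similarity(
    dataset,
    patches_field="ground_truth",
    brain_key="gt_sim",
)

# Steps 4-5: browse the object patches in the App; in the Embeddings panel,
# color by "ground_truth.label" and lasso points that sit far from their
# class cluster, which are the candidate labelling mistakes
patches = dataset.to_patches("ground_truth")
session = fo.launch_app(patches)

# Step 6: tag the confirmed mistakes in the App (e.g. with a "mistake" label
# tag), then pull out those images to send back for re-labelling
mistakes = dataset.match_labels(tags="mistake", fields="ground_truth")
```

For the variant that does invoke a pre-trained model, I assume something like `fob.compute_mistakenness(dataset, "predictions", label_field="ground_truth")` could be used instead, given a field of model predictions.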

Willingness to contribute

The FiftyOne Community encourages new feature contributions. Would you or another member of your organization be willing to contribute an implementation of this feature?

  • [ ] Yes. I can contribute this feature independently.
  • [x] Yes. I would be willing to contribute this feature with guidance from the FiftyOne community.
  • [ ] No. I cannot contribute this feature at this time.
