Top-K precision/recall multilabel metrics for ranking tasks
Following the discussion from https://github.com/pytorch/ignite/issues/466#issuecomment-478339986, it would be nice to have such a metric in Ignite.
In the context of a multilabel task, compute a top-k precision/recall per label (treating all labels independently).
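As a rough illustration of the requested behavior, here is a minimal sketch (not the eventual Ignite API) that ranks samples per label independently and computes precision/recall at k; the function name, signature, and shapes are assumptions:

```python
# A minimal sketch of per-label top-k precision/recall, assuming
# `y_pred` holds real-valued scores of shape (n_samples, n_labels)
# and `y` holds binary ground truth of the same shape, with
# k <= n_samples. The function name and signature are illustrative,
# not the eventual Ignite API.
import torch


def topk_precision_recall_per_label(y_pred, y, k):
    # For each label column, rank all samples by score and keep the top-k.
    topk_indices = y_pred.topk(k, dim=0).indices          # (k, n_labels)
    # True positives among the top-k, counted per label.
    hits = y.gather(0, topk_indices).sum(dim=0).float()   # (n_labels,)
    precision = hits / k
    # Guard against labels that have no positive sample at all.
    positives = y.sum(dim=0).float().clamp(min=1.0)
    recall = hits / positives
    return precision, recall
```

Since each label is ranked independently, a label's precision@k and recall@k depend only on its own score column.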
Any update on this?
cc @anmolsjoshi
@Data-drone in case you might be interested, here is a temporary solution: https://github.com/pytorch/ignite/issues/513#issuecomment-488983281
Hi! I would like to work on this. Would you please assign this to me?
@Tanmay06 sure!
@Tanmay06 any updates on this issue?
I've almost completed it, but I'm getting some bugs. I'll fix those and most likely raise a PR by this weekend.
Sounds good! Do not hesitate to send a draft PR such that we could iterate faster.
Just to be sure I understand this issue well:
1 - By multilabel (in the context of classification), you mean that each data point can belong to multiple classes at the same time. For example, in an image classification task, a single image may contain multiple objects, and the model needs to predict all the objects present in the image.
2 - Following the reference given by @RoyHirsch in #466, this looks like MultilabelPrecision [1] and MultilabelRecall [2] of torchmetrics.classification (see the usage sketch after this list).
3 - What's blocking https://github.com/pytorch/ignite/pull/516? Are multilabel Precision and Recall expected to come in that PR?
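For reference, a short usage sketch of the two torchmetrics classes from [1] and [2]; note that, as far as I can tell, they binarize predictions with a fixed threshold (default 0.5) rather than ranking by top-k, so they may not match the semantics discussed in this issue:

```python
# Usage sketch of the torchmetrics classes referenced in [1] and [2].
import torch
from torchmetrics.classification import MultilabelPrecision, MultilabelRecall

preds = torch.tensor([[0.9, 0.1, 0.8],
                      [0.2, 0.7, 0.6]])   # per-label scores
target = torch.tensor([[1, 0, 1],
                       [0, 1, 1]])        # binary ground truth

precision = MultilabelPrecision(num_labels=3, average=None)  # per-label result
recall = MultilabelRecall(num_labels=3, average=None)
print(precision(preds, target))  # one precision value per label
print(recall(preds, target))     # one recall value per label
```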
By multilabel (in the context of classification), you mean that each data point can belong to multiple classes at the same time.
Yes, exactly: non-exclusive class labels, similar to tags. For 3 classes, ground truth can be y=[0, 1, 1] or y=[1, 1, 1] or y=[0, 0, 0], etc.
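A tiny, self-contained illustration of such non-exclusive targets (the shapes and values are chosen arbitrarily):

```python
import torch

# Each row is one sample, each column one tag; any number of entries
# per row may be 1, including none or all of them.
y = torch.tensor([[0, 1, 1],
                  [1, 1, 1],
                  [0, 0, 0]])
print(y.sum(dim=1))  # tensor([2, 3, 0]): positives per sample vary freely
```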
2 - Following the reference given by @RoyHirsch in https://github.com/pytorch/ignite/issues/466, this looks like MultilabelPrecision [1] and MultilabelRecall [2] of torchmetrics.classification.
I do not know what those MultilabelPrecision [1] and MultilabelRecall [2] classes are doing. I have to reread https://arxiv.org/pdf/1312.4894.pdf to refresh the idea of what we wanted to compute...
What's blocking https://github.com/pytorch/ignite/pull/516? Are multilabel Precision and Recall expected to come in that PR?
This PR is rather old and adding a new arg to its API is not a good idea, IMO. Maybe introducing an arg like average (see Precision, Recall) would make more sense...
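For context, this is what the existing Precision/Recall API looks like today; a top-k variant could presumably follow the same pattern (the extra k argument mentioned in the comment below is purely hypothetical, nothing is implemented yet):

```python
# Existing Ignite API: Precision and Recall already accept `average`
# and `is_multilabel`. A hypothetical top-k variant could add a `k`
# argument in the same style (not implemented anywhere yet).
from ignite.metrics import Precision, Recall

# Averaged multilabel precision/recall over binarized (0/1) predictions
# of shape (batch_size, num_labels).
precision = Precision(average=True, is_multilabel=True)
recall = Recall(average=True, is_multilabel=True)
```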