
Is py-motmetrics missing any metrics/datasets?

Open cheind opened this issue 5 years ago • 9 comments

Hey,

this is a community question: are you aware of any metrics/datasets that py-motmetrics does not support out of the box, but that you wish it did? If yes, please let us know by citing the related publication.

cheind avatar Feb 04 '20 17:02 cheind

DETRAC: https://arxiv.org/pdf/1511.04136.pdf

fguney avatar Feb 18 '20 18:02 fguney

@fguney I've scrolled through the paper. It does not seem to propose any metrics beyond those py-motmetrics already implements. As far as the file format goes, we now support it via #53

cheind avatar Jul 28 '20 06:07 cheind

The problem with the metrics on UA-DETRAC is that they use "ignore zones": regions that are not annotated but where many cars can still be detected, making them a source of false positives. I think this is not taken into account in the current motmetrics, right?

agirbau avatar Feb 07 '21 10:02 agirbau

I think that, at the moment, ignore zones should be handled by the user before calling the toolkit functions.

There is one exception: for the MOT challenge datasets, there is a second script (apps/evaluateTracking.py instead of apps/eval_motchallenge.py) that calls a different function (utils.CLEAR_MOT_M() instead of utils.compare_to_groundtruth()), which uses preprocess.preprocessResult() to remove predictions that are matched to "ignore" classes in MOT. I think, however, that this is not identical to an ignore zone, because each ignore zone can only eliminate one detection.
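Handling ignore zones upstream of the toolkit could look like the sketch below: drop predictions whose center falls inside a zone before accumulating. The `center_in_zone` helper and the center-point criterion are illustrative assumptions, not part of py-motmetrics.

```python
import numpy as np

def center_in_zone(boxes, zones):
    """True for each box in `boxes` (N, 4) whose center lies inside any
    ignore zone in `zones` (M, 4); both in (x, y, width, height) format."""
    boxes = np.atleast_2d(np.asarray(boxes, dtype=float))
    zones = np.atleast_2d(np.asarray(zones, dtype=float))
    cx = boxes[:, 0] + boxes[:, 2] / 2.0
    cy = boxes[:, 1] + boxes[:, 3] / 2.0
    # Compare every center against every zone at once via broadcasting.
    in_x = (cx[:, None] >= zones[None, :, 0]) & \
           (cx[:, None] <= zones[None, :, 0] + zones[None, :, 2])
    in_y = (cy[:, None] >= zones[None, :, 1]) & \
           (cy[:, None] <= zones[None, :, 1] + zones[None, :, 3])
    return (in_x & in_y).any(axis=1)
```

Per frame, one would keep only predictions where this mask is False and then pass the surviving boxes and ids to `mm.distances.iou_matrix()` / `MOTAccumulator.update()` as usual.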

If this needs to be implemented for detrac, you could modify apps/eval_detrac.py to take care of this? (and make a PR?)

jvlmdr avatar Feb 08 '21 09:02 jvlmdr

I opened a PR to integrate the ignore zones for the UA-DETRAC dataset. The main problem is speed, as it loops over all detections / ground-truth boxes and compares each one to every ignore zone. I think it could be highly parallelizable though.
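The detection-vs-zone comparison can indeed be vectorized: broadcasting computes the overlap of every (detection, zone) pair at once, with no Python loop. The function names and the 0.5 overlap threshold below are assumptions for illustration, not the PR's actual code.

```python
import numpy as np

def overlap_ratio(dets, zones):
    """Fraction of each detection's area covered by each ignore zone,
    for all (detection, zone) pairs at once. dets: (N, 4), zones: (M, 4),
    both in (x, y, width, height) format. Returns an (N, M) array."""
    dets = np.atleast_2d(np.asarray(dets, dtype=float))
    zones = np.atleast_2d(np.asarray(zones, dtype=float))
    # Intersection rectangle for every pair, shapes broadcast to (N, M).
    x1 = np.maximum(dets[:, None, 0], zones[None, :, 0])
    y1 = np.maximum(dets[:, None, 1], zones[None, :, 1])
    x2 = np.minimum(dets[:, None, 0] + dets[:, None, 2],
                    zones[None, :, 0] + zones[None, :, 2])
    y2 = np.minimum(dets[:, None, 1] + dets[:, None, 3],
                    zones[None, :, 1] + zones[None, :, 3])
    inter = np.clip(x2 - x1, 0.0, None) * np.clip(y2 - y1, 0.0, None)
    return inter / (dets[:, 2] * dets[:, 3])[:, None]

def drop_ignored(dets, zones, thresh=0.5):
    """Keep detections whose overlap with every zone is <= thresh."""
    dets = np.atleast_2d(np.asarray(dets, dtype=float))
    if dets.size == 0 or np.asarray(zones).size == 0:
        return dets
    return dets[(overlap_ratio(dets, zones) <= thresh).all(axis=1)]
```

With NumPy doing the pairwise work in C, this should be fast enough even for many zones per frame.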

agirbau avatar Feb 08 '21 10:02 agirbau

How about the AOGM measure for cell tracking? It is used in the Cell Tracking Challenge. https://cbia.fi.muni.cz/software/aogm-measure.html
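For context, AOGM is a graph-edit cost: the weighted number of vertex and edge operations needed to transform the computed tracking graph into the reference one, which the Cell Tracking Challenge then normalizes into its TRA score. A hedged sketch of just that final arithmetic (the operation counts would have to come from a graph-matching step, which is the hard part; the default weights mirror my reading of the TRA setup and should be treated as an assumption):

```python
def aogm(counts, weights=None):
    """AOGM as a weighted sum of graph edit operations:
    NS: vertices to split, FN: missing vertices, FP: spurious vertices,
    ED: redundant edges, EA: missing edges, EC: edges with wrong semantics.
    `counts` maps operation names to how often each was needed."""
    w = weights or {"NS": 5, "FN": 10, "FP": 1, "ED": 1, "EA": 1.5, "EC": 1}
    return sum(w[op] * counts.get(op, 0) for op in w)

def tra(aogm_value, aogm_empty):
    """TRA normalizes AOGM by the cost of building the reference graph
    from an empty one (aogm_empty), so the score lies in [0, 1]."""
    return 1.0 - min(aogm_value, aogm_empty) / aogm_empty
```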

michih8 avatar Oct 24 '21 09:10 michih8

@michih8 any chance you could provide a PR?

cheind avatar Oct 28 '21 03:10 cheind

HOTA would be nice to include. It's now the default tracking metric in KITTI, and the motivation for this metric seems reasonable in general, not just for the KITTI dataset.

The reference implementation of HOTA is here: https://github.com/JonathonLuiten/TrackEval or more specifically here https://github.com/JonathonLuiten/TrackEval/blob/master/trackeval/metrics/hota.py
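For orientation, the top-level combination in HOTA is simple once the matching is done: at each localization threshold alpha it is the geometric mean of a detection score and an association score, averaged over alpha = 0.05, 0.10, ..., 0.95. A sketch of just that final combination (computing the TP/FN/FP counts and per-TP association scores via the per-alpha matching is the real work and is omitted here):

```python
import numpy as np

def hota_alpha(tp, fn, fp, ass_scores):
    """HOTA at a single localization threshold alpha.

    tp, fn, fp: detection counts at that alpha.
    ass_scores: per-TP association scores A(c) = TPA / (TPA + FNA + FPA).
    """
    det_a = tp / (tp + fn + fp) if (tp + fn + fp) else 0.0  # detection accuracy
    ass_a = float(np.mean(ass_scores)) if tp else 0.0        # association accuracy
    return float(np.sqrt(det_a * ass_a))

def hota(per_alpha):
    """Final HOTA: mean of hota_alpha over all thresholds; `per_alpha`
    is a list of (tp, fn, fp, ass_scores) tuples, one per alpha."""
    return float(np.mean([hota_alpha(*args) for args in per_alpha]))
```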

ahrnbom avatar Nov 22 '21 11:11 ahrnbom

@ahrnbom, yes that seems like a good idea. Could you provide a PR?

cheind avatar Nov 22 '21 12:11 cheind