
HOTA (and other) evaluation metrics for Multi-Object Tracking (MOT).

98 TrackEval issues, sorted by recently updated

How can I evaluate a multi-class dataset?

Which of the HOTA implementations is classification-aware?

Thank you for your great work! This repository can evaluate the training sets in MOT. How can I evaluate the test sets in MOT?

Thank you for your great work! Can your metrics measure detection and tracking performance for MOT17Det and MOT20Det? Thank you so much!

I tried to use the **run_mot_challenge.py** tool to evaluate my tracker on my custom dataset. Before evaluation, I changed default_dataset_config['CLASS_TO_EVAL'], self.valid_classes, self.class_name_to_class_id, distractor_class_name, etc. in **mot_challenge_2d_box.py**. But the output MOTA...
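For orientation, the evaluation flow that **run_mot_challenge.py** wraps looks roughly like the sketch below. The tracker name is a placeholder, and classes other than pedestrian still require the source edits described in the issue above.

```python
import trackeval

# Rough sketch of the flow inside scripts/run_mot_challenge.py.
# 'MyTracker' is a placeholder; CLASSES_TO_EVAL only accepts classes that
# the dataset class recognises (by default just 'pedestrian').
eval_config = trackeval.Evaluator.get_default_eval_config()
dataset_config = trackeval.datasets.MotChallenge2DBox.get_default_dataset_config()
dataset_config['TRACKERS_TO_EVAL'] = ['MyTracker']
dataset_config['CLASSES_TO_EVAL'] = ['pedestrian']

evaluator = trackeval.Evaluator(eval_config)
dataset_list = [trackeval.datasets.MotChallenge2DBox(dataset_config)]
metrics_list = [trackeval.metrics.HOTA(), trackeval.metrics.CLEAR()]
evaluator.evaluate(dataset_list, metrics_list)
```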

Hi, can I evaluate the MODA metric using this code? MODA (Multi-Object Detection Accuracy) is the metric for object detection (MOT17Det, MOT20Det).
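For context on what MODA would require, the standard CLEAR-style definition combines per-frame misses and false positives, with no identity term:

```latex
\mathrm{MODA} = 1 - \frac{\sum_t \left(\mathrm{FN}_t + \mathrm{FP}_t\right)}{\sum_t \mathrm{GT}_t}
```

where FN_t, FP_t, and GT_t are the misses, false positives, and ground-truth objects in frame t.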

This PR adds the detection metrics from the MOT Challenge. There are some differences between this implementation and the original implementation. I have outlined them here: https://docs.google.com/document/d/1UTR4G8va_fe-KsNZhrQJNE5w_7F9fPBVy9G4dKWrBfk/preview

I noticed code [here](https://github.com/JonathonLuiten/TrackEval/blob/bcd03a6cc5f4fa0074da89d95e859fe77e264c3e/trackeval/datasets/mots_challenge.py#L267) that does not allow the masks of different objects to overlap. Is there any reason or insight behind this choice? Considering the recent instance segmentation...
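For readers who want to test their own annotations against that constraint, here is a minimal sketch using pycocotools (assumed here only because TrackEval relies on its RLE utilities; `check_no_overlap` is a hypothetical helper, not part of the library):

```python
from pycocotools import mask as mask_utils

def check_no_overlap(rle_masks):
    """Raise if any two RLE-encoded masks in one frame share a pixel.

    Illustrative sketch of the linked check: keep a running union of the
    masks seen so far and intersect each new mask with it, so any pairwise
    overlap is caught.
    """
    if not rle_masks:
        return
    merged = rle_masks[0]
    for rle in rle_masks[1:]:
        # Intersection of the new mask with the union of all previous masks.
        if mask_utils.area(mask_utils.merge([merged, rle], intersect=True)) > 0.0:
            raise ValueError('Masks of different objects overlap')
        merged = mask_utils.merge([merged, rle], intersect=False)
```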

Hello, I'm trying to run the MOT challenge script on a custom dataset. I think I've got the folder structure right (including all the .ini files) and my gt files...
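For comparison, the default layout the MOT Challenge scripts expect is roughly the tree below. Benchmark, sequence, and tracker names are placeholders, and the roots can be changed via the GT_FOLDER and TRACKERS_FOLDER config options.

```
data/gt/mot_challenge/
├── seqmaps/
│   └── MOT17-train.txt          # list of sequence names for the split
└── MOT17-train/
    └── MOT17-02/
        ├── seqinfo.ini          # sequence metadata (.ini file)
        └── gt/gt.txt            # ground-truth annotations
data/trackers/mot_challenge/
└── MOT17-train/
    └── MyTracker/
        └── data/
            └── MOT17-02.txt     # one result file per sequence
```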

Hi, thanks for your amazing work. But I got different MOTA scores when using the evaluation code in ByteTrack and TrackEval (72.5 vs. 73.275). Do you know the reason for that?...
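Gaps of this size often trace back to preprocessing rather than the metric itself, for example how distractor-class detections are filtered before matching, since the MOTA formula is fixed:

```latex
\mathrm{MOTA} = 1 - \frac{\sum_t \left(\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDSW}_t\right)}{\sum_t \mathrm{GT}_t}
```

Any difference in how the FN, FP, and IDSW counts are produced shifts the score even for identical tracker output; this is one plausible cause of the gap, not a confirmed diagnosis.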