trackformer
The MOTP metric is showing a low score.
Hi, I am using this reference code to do simultaneous detection and tracking on small objects. The average precision and average recall of the detection model show good performance, but when it comes to tracking, the MOTP score is very low. Any suggestions on how I can improve the performance? Also, the MOTA and other metrics are high, so I can't figure out where the problem actually lies. @timmeinhardt
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.660
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.946
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.808
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.660
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.692
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.705
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.705
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.705
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
INFO - root - mergeOverall: 0.022 seconds.
IDF1 IDP IDR Rcll Prcn GT MT PT ML FP FN IDs FM MOTA MOTP IDt IDa IDm
Train_Moffat 99.0% 100.0% 98.0% 98.0% 100.0% 1 1 0 0 0 10 0 0 98.0% 0.086 0 0 0
OVERALL 99.0% 100.0% 98.0% 98.0% 100.0% 1 1 0 0 0 10 0 0 98.0% 0.086 0 0 0
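For context, this table matches the format of a py-motmetrics MOT-challenge summary. Below is a minimal sketch of how such a summary is produced (the boxes and IDs are made-up placeholders, not from this dataset). One detail worth noting, assuming the evaluation uses py-motmetrics' IoU distance: MOTP there is an average *distance* over matched pairs, where 0 means perfect overlap.

```python
# Minimal py-motmetrics sketch; boxes and IDs are made-up placeholders.
import numpy as np
import motmetrics as mm

acc = mm.MOTAccumulator(auto_id=True)

# One frame: two ground-truth objects, two hypotheses, boxes as (x, y, w, h).
gt_boxes = np.array([[10, 10, 20, 20], [40, 40, 20, 20]])
pred_boxes = np.array([[12, 11, 20, 20], [41, 39, 20, 20]])

# iou_matrix returns 1 - IoU, i.e. a distance: 0 is a perfect overlap.
dists = mm.distances.iou_matrix(gt_boxes, pred_boxes, max_iou=0.5)
acc.update([1, 2], [1, 2], dists)  # ground-truth IDs, hypothesis IDs, distances

mh = mm.metrics.create()
summary = mh.compute(acc, metrics=mm.metrics.motchallenge_metrics,
                     name='Train_Moffat')
print(mm.io.render_summary(summary, formatters=mh.formatters,
                           namemap=mm.io.motchallenge_metric_names))
```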
Currently, I am only using a single object per image, but I will switch to multiple objects once single-object tracking improves.
Have you tried optimizing the tracking thresholds, for example `detection_obj_score_thresh` and `track_obj_score_thresh`? Or visualizing the output? This should give you a good idea of what is wrong.
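For the visualization part, something like the following sketch could work; the results layout `{track_id: {frame_id: [x1, y1, x2, y2]}}` is an assumption for illustration, not TrackFormer's actual output schema.

```python
# Hypothetical sketch: overlay tracked boxes on frames for visual inspection.
# The results format {track_id: {frame_id: [x1, y1, x2, y2]}} is assumed.
import cv2

def draw_tracks(frame_paths, results, out_dir):
    for frame_id, path in enumerate(frame_paths):
        img = cv2.imread(path)
        for track_id, boxes in results.items():
            if frame_id in boxes:
                x1, y1, x2, y2 = map(int, boxes[frame_id])
                cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
                cv2.putText(img, str(track_id), (x1, y1 - 5),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        cv2.imwrite(f"{out_dir}/{frame_id:06d}.jpg", img)
```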
I will do that and check if there is any improvement. Is there any other way to find the optimal tracking thresholds besides manually changing them by guesswork?
You could write a script to search for the optimal hyperparameters, but there is no analytic way to find them. First, I would visualize your outputs to understand what's happening. This should give you an idea of which parameters to change and how.
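A brute-force sweep over the two thresholds might look like the sketch below; `run_tracker_and_eval` is a hypothetical stand-in for whatever runs tracking and returns the validation metric you care about.

```python
# Hypothetical grid search over the two tracking thresholds.
import itertools

def run_tracker_and_eval(det_thresh, track_thresh):
    """Stand-in: run tracking with the given thresholds and
    return e.g. MOTA on a validation sequence."""
    raise NotImplementedError

best = None
for det_t, track_t in itertools.product(
        [0.3, 0.4, 0.5, 0.6, 0.7],   # detection_obj_score_thresh candidates
        [0.3, 0.4, 0.5, 0.6, 0.7]):  # track_obj_score_thresh candidates
    score = run_tracker_and_eval(det_t, track_t)
    if best is None or score > best[0]:
        best = (score, det_t, track_t)

print(f"best score {best[0]:.3f} at det={best[1]}, track={best[2]}")
```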
Hi, I tried to use Visdom to visualize the training and evaluation metrics as suggested in the documentation, but the Visdom server is showing a blank blue screen.
I started the Visdom server on port 8097 by running "visdom" in the terminal, and then changed the port number in the config file accordingly.
To verify that the code is connected to the Visdom server, the "WebSocket connected" message does appear.
But when I look at logs/visdom, the folder is empty.
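A minimal standalone Visdom smoke test, independent of TrackFormer's own logging, can help isolate the problem (the window title and values below are arbitrary). One thing worth checking: the Visdom client only writes an event log to disk when `log_to_filename` is set, so an empty logs/visdom folder may just mean no log file was configured; I am not certain how TrackFormer wires this up.

```python
# Standalone Visdom smoke test; values are arbitrary dummy data.
import numpy as np
import visdom

vis = visdom.Visdom(port=8097,
                    log_to_filename='logs/visdom/test.log')  # write events to disk
assert vis.check_connection(), "could not reach the Visdom server"

vis.line(X=np.array([0, 1, 2]), Y=np.array([0.5, 0.4, 0.3]),
         opts=dict(title='loss (dummy values)'))
```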
I tried to visualize the bounding box locations during validation, and the predictions worked quite well. But for the test data, the predictions are random.
During the training phase, I only used validation data for tracking and didn't use any test data. The validation data itself is in sequential format. I wanted the test data to remain unseen during training.
The -1.23 values are what I get for the test data.
For the validation data, the predictions are quite good.
@timmeinhardt any idea what's causing these issues?
I managed to load results from Visdom. But I still can't figure out why the tracking works on the validation data but not the test data. @timmeinhardt