Some confusion about the paper
Hi, thanks for your great work! I have a question about your paper: in the MOT17 experiment section, did you use the MOT17 test dataset for testing, or part of the training dataset as the test set?
Both are used but for different things. For the test set submissions in the benchmark section, we used the MOT17 test set. For the ablation experiments we used a 50-50 frame split on the training data, i.e., the first 50% of frames of each sequence are for training and the latter 50% for validation.
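For reference, here is a minimal sketch of how such a per-sequence 50-50 frame split could be computed. This is illustrative only, not the repository's actual split code; the sequence lengths are the MOT17 training sequence lengths for the DPM detector set.

```python
# Minimal sketch of a 50-50 frame split per sequence (illustrative only,
# not the repository's actual split code). Frames are assumed 1-indexed.

MOT17_TRAIN_SEQ_LENGTHS = {
    "MOT17-02": 600,
    "MOT17-04": 1050,
    "MOT17-05": 837,
    "MOT17-09": 525,
    "MOT17-10": 654,
    "MOT17-11": 900,
    "MOT17-13": 750,
}

def split_frames(seq_length: int):
    """Return (train_frames, val_frames): first half of the frames
    for training, second half for validation."""
    split_frame = seq_length // 2
    train_frames = range(1, split_frame + 1)              # frames 1..N/2
    val_frames = range(split_frame + 1, seq_length + 1)   # frames N/2+1..N
    return train_frames, val_frames

for seq, length in MOT17_TRAIN_SEQ_LENGTHS.items():
    train, val = split_frames(length)
    print(f"{seq}: train frames {train.start}-{train.stop - 1}, "
          f"val frames {val.start}-{val.stop - 1}")
```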
Thanks for your reply! However, I still have one point of confusion about your work: are the results on the train set for public and private detection generated by running track.py? And how do I generate the results on the test set for public and private detection, respectively?
Ablation studies are evaluated on splits of the training set where we have ground truth. The test set ground truth is not available. One has to generate prediction files and submit them to the https://motchallenge.net/ evaluation server.
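For context, the evaluation server expects one plain-text result file per sequence in the standard MOTChallenge format: one line per box with frame number, track id, bounding box, confidence, and -1 placeholders for the 3D world coordinates. A minimal sketch of writing such a file is below; the `tracks` structure is a hypothetical example, not this repository's internal output format.

```python
import csv

# Hypothetical tracker output: {track_id: {frame: (x, y, w, h, score)}}.
# This structure is illustrative, not the repository's internal format.
tracks = {
    1: {1: (205.0, 310.0, 45.0, 110.0, 0.98),
        2: (207.5, 311.0, 45.0, 110.0, 0.97)},
    2: {1: (560.0, 250.0, 50.0, 120.0, 0.95)},
}

# MOTChallenge result format, one line per box:
# <frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>
# For 2D tracking the world coordinates x, y, z are set to -1.
with open("MOT17-01-DPM.txt", "w", newline="") as f:
    writer = csv.writer(f)
    for track_id, frames in tracks.items():
        for frame, (x, y, w, h, score) in sorted(frames.items()):
            writer.writerow([frame, track_id, x, y, w, h, score, -1, -1, -1])
```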
@timmeinhardt Thanks for your quick reply! When I submit the results to the evaluation server, should I specify whether the tracker uses public or private detections? And where do I set that, if needed?
There is a setting when you create a tracker on the MOTChallenge webpage which places your tracker on either the public or the private leaderboard. But this only matters if you want to publish the tracker. The ground truth for public and private evaluation is the same.
Thanks for your reply! Issue closed.