Tim Meinhardt
Both are used, but for different things. For the test set submissions in the benchmark section, we used the MOT17 test set. For the ablation experiments we used a 50-50...
Ablation studies are evaluated on splits of the training set, where we have ground truth. The test set ground truth is not available; one has to generate prediction files and...
There is a setting when you create a tracker on the MOTChallenge website that puts your tracker on either the public or the private leaderboard. But this only matters when you...
Did you successfully compile the deformable attention files as described in the install instructions?
The MOT and CrowdHuman datasets only support a single class, but the extension to multiple classes should be very straightforward and require little implementation effort.
I never tried to convert the repo to multiple classes. This is definitely possible and should be straightforward. But you need to change the code accordingly.
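To make the needed change concrete, here is a minimal sketch, not the repo's actual code: in a DETR-style detector the classification head is a single linear layer, so most of the multi-class extension is changing its output dimension (plus the matching labels in your dataset). The `num_classes_multi` value is a hypothetical example.

```python
import torch.nn as nn

hidden_dim = 256

# Single-class setup (MOT/CrowdHuman: person only).
# The "+ 1" is the background / no-object class.
num_classes = 1
class_embed = nn.Linear(hidden_dim, num_classes + 1)

# Multi-class variant: only the head's output dimension (and the
# class labels produced by the dataset) need to change.
num_classes_multi = 8  # hypothetical number of classes
class_embed_multi = nn.Linear(hidden_dim, num_classes_multi + 1)
print(class_embed_multi.out_features)  # 9
```

The matcher and loss in DETR-style code are usually already written against `num_classes`, so they pick up the new head size with no further edits.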
Great! Is it working now?
Class name in visdom sounds good. :)
Are you working with a conda environment? Please make sure all versions are correct (PyTorch, CUDA, cuDNN, etc.). This is 99% a version issue.
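A quick way to check the versions inside the active environment is a small report like the following (a generic sketch, not part of the repo):

```python
import torch

def env_report():
    """Collect the version info that most often causes build/runtime
    mismatches with custom CUDA ops such as deformable attention."""
    return {
        "torch": torch.__version__,
        "cuda": torch.version.cuda,  # CUDA version PyTorch was built against (None on CPU builds)
        "cudnn": torch.backends.cudnn.version(),  # None on CPU-only builds
        "cuda_available": torch.cuda.is_available(),
    }

print(env_report())
```

Compare the printed versions against the ones pinned in the install instructions before rebuilding the ops.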
The `query_embed` is the encoding that allows the decoder to differentiate the object queries. We add a zero encoding to track queries, as they are already refined and more...
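The idea can be sketched as follows (a simplified illustration, not the repo's exact code): object queries carry a learned per-query encoding, while track queries get a zero encoding because their content embedding already encodes a refined object from the previous frame.

```python
import torch
import torch.nn as nn

num_object_queries, hidden_dim = 100, 256

# Learned encoding that lets the decoder tell object queries apart.
query_embed = nn.Embedding(num_object_queries, hidden_dim)

def build_decoder_query_pos(track_queries: torch.Tensor) -> torch.Tensor:
    """Stack zero encodings for track queries on top of the learned
    object-query encodings. track_queries: (num_track_queries, hidden_dim)."""
    zeros = torch.zeros_like(track_queries)  # zero encoding for track queries
    return torch.cat([zeros, query_embed.weight], dim=0)

# Example: 5 track queries carried over from the previous frame.
pos = build_decoder_query_pos(torch.randn(5, hidden_dim))
print(pos.shape)  # torch.Size([105, 256])
```

The decoder then attends over all 105 queries jointly; only the 100 object queries receive a non-zero positional encoding.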