Tracking head not working as expected
Hey @jytime,
First of all, the work is absolutely amazing, and your code is super clean and really easy to work with. Thank you so much for that.
I am really interested in the point-tracking application of your work, so I tried running the BMX video from DAVIS. I initialised a bunch of points and just wanted to see how the model tracks them. To get the point tracks, I passed the query points into the model's forward call in the demo_viser.py script, since I saw that this automatically triggers the tracking head.
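For reference, here is roughly what I did (a minimal sketch based on the README's query-point example; the frame paths and query coordinates are placeholders, and the exact forward signature / output key may differ by version):

```python
import torch
from vggt.models.vggt import VGGT
from vggt.utils.load_fn import load_and_preprocess_images

device = "cuda" if torch.cuda.is_available() else "cpu"
model = VGGT.from_pretrained("facebook/VGGT-1B").to(device)

# frames of the DAVIS BMX clip (hypothetical paths)
image_names = [f"davis/bmx-trees/{i:05d}.jpg" for i in range(20)]
images = load_and_preprocess_images(image_names).to(device)  # (S, 3, H, W)

# query points in pixel coordinates of the first frame, shape (N, 2)
query_points = torch.FloatTensor([[100.0, 200.0], [60.7, 259.9]]).to(device)

with torch.no_grad():
    # passing query_points makes the forward pass also run the tracking head
    predictions = model(images, query_points=query_points)

# per-frame 2D locations of each query point
tracks = predictions["track"]
```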
I have attached a GIF of the output; the points shown in the first frame are the query points.
From what I can observe, the points remain stationary throughout the video. Do you know why that might be happening?
Hi, thanks for your interest and kind words!
The issue you’re seeing arises because the tracking head of VGGT was designed and trained specifically for rigid scenes, without dynamic or deformable motion. This explains why the points remain stationary in the video, which involves significant dynamic motion.
As we noted in the paper, we fine-tuned the VGGT backbone combined with CoTracker's tracking head specifically for dynamic point-tracking scenarios. However, this fine-tuned version hasn't been released yet; ideally, we would release it within the CoTracker repository.
Thanks again for your feedback!
Hey @jytime,
Thank you for the quick response and the explanation of the issue. I have a few follow-up questions:
- Do you have a rough timeline for when you can release the CoTracker fine-tuned weights?
- I am thinking about fine-tuning CoTracker myself if the release will take time, and I have a few questions about that:
  - For fine-tuning CoTracker, how many Kubric videos did you use? Is it similar to the split you used for CoTracker3?
  - Could you share the training hyperparameter configuration (number of epochs, learning rate, etc.)?
  - What was the training wall time, and across how many GPUs? I am trying to gauge the feasibility of retraining the model myself.
Really appreciate all the help.
Thank you!
Hi,
We don't have a clear timeline for the release yet. For the fine-tuning, yes, we used the released Kubric dataset that was used for CoTracker3. I need to double-check the details, but I remember it takes about 1-2 days on 64 GPUs; 32 GPUs can also work.
I am also interested in the VGGT fine-tuned version of CoTracker. Looking forward to it!
@jytime Firstly, congrats on the best paper! Fully deserved!!
Secondly, I was curious: do you have a timeline for this release?
Thanks
@jytime Just wanted to follow up on the previous message and ask when the release of the fine-tuned tracking head is planned. Thank you!