
📢 Call for Use Cases: We Want to Hear from You! 🛠

Open nikitakaraevv opened this issue 1 year ago • 8 comments

Hey everyone! 👋

First off, a big shoutout to all of you for being part of the CoTracker community. Thanks for the support and for using the project in all the awesome ways you do! 🙌

We’d love to get a better idea of how you’re using CoTracker in your work. Whether it's powering your AI research, building applications, or something else entirely — we’re super curious to know! 🚀 And if there’s anything you think could be improved or added, we’re all ears.

You can drop a comment right here or feel free to reach out to us directly via email at [email protected] if that’s easier. We’d love to hear from you either way!

Hearing about your experiences helps us figure out what’s working well and what might need some tweaking. So if you’ve got a minute, it’d be awesome to hear your thoughts!

Thanks again for being a part of this journey! We’re excited to see what you’re all up to. 🌟

nikitakaraevv avatar Oct 17 '24 09:10 nikitakaraevv

Thank you for starting this discussion! I tried out the HF demo with the bear example and noticed that it didn’t track the feet as expected. I’m curious if this is a particularly challenging case for point tracking? I’m not trying to compare models, just trying to understand what might be happening.

[image: HF demo result on the bear example]

Khoa-NT avatar Oct 18 '24 01:10 Khoa-NT

Hi, I am keenly following the development of these point trackers. My use case is tracking the flight/movement of insects in natural environments, usually against cluttered backgrounds with lots of occlusion. Generally I need at least 2 points on the insect, but the ability to track many points opens up a wealth of information about animal movements. I have tried the demo version, but since I'm lacking in technical skills (it took me a while to figure out how to change the path to the file!), I still haven't been able to run it on my own videos. Your documentation is good, but it's still very complicated to follow for someone like me coming from a biology background.

dinrao avatar Oct 23 '24 00:10 dinrao

Hi @dinrao, thank you for sharing your use case! Did you try the Hugging Face demo as well? It should be relatively straightforward to upload your own video there: https://huggingface.co/spaces/facebook/cotracker. What do you want to achieve by tracking insects?

nikitakaraevv avatar Oct 23 '24 11:10 nikitakaraevv

Yes, I tried the demo; it works well, but I have longer videos. Here's an example of how tracking helps in animal behaviour: we manually tracked wasps approaching spiders on flowers to see if the wasps can detect them from a distance (see the supplementary videos for an example).
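For clips longer than the demo accepts, one common workaround is to run the tracker window by window and stitch the tracks at the overlaps (the CoTracker repo also ships an online mode intended for long videos). Below is a minimal pure-Python sketch of just the windowing step; the function name and the window/stride values are illustrative, not part of CoTracker's API:

```python
# Sketch: split a long clip into overlapping frame windows so a
# point tracker can be run per window and the tracks stitched at
# the seams. Window/stride sizes here are arbitrary examples.

def sliding_windows(num_frames, window=16, stride=8):
    """Return (start, end) frame ranges covering the whole clip."""
    last_start = max(num_frames - window, 0)
    starts = list(range(0, last_start + 1, stride))
    if starts[-1] != last_start:  # make sure the tail is covered
        starts.append(last_start)
    return [(s, min(s + window, num_frames)) for s in starts]

# A 100-frame clip with 16-frame windows overlapping by 8 frames:
windows = sliding_windows(100)
assert windows[0] == (0, 16)
assert windows[-1] == (84, 100)
# Consecutive windows overlap, so tracks can be handed over at seams.
assert all(b[0] < a[1] for a, b in zip(windows, windows[1:]))
```

Each window would then be passed to the tracker, re-querying the last tracked positions from the previous window as the new window's query points.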

dinrao avatar Oct 23 '24 17:10 dinrao

I would like to know what the format of your training set is, and how I can convert my own video into the same format as your training set through annotation or other methods. Can you give me a sample or tutorial? Thank you very much.

JackIRose avatar Oct 26 '24 06:10 JackIRose

> I would like to know what the format of your training set is, and how I can convert my own video into the same format as your training set through annotation or other methods. Can you give me a sample or tutorial? Thank you very much.

I have a video of mouse tracking. I want to track the nose and ears of the mouse, but the performance of the original model is not very good, so I want to know how to construct a training set for fine-tuning.
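For reference while waiting for an official answer, here is a sketch of a TAP-Vid-style annotation layout, a common format for point-tracking data (per-point pixel trajectories plus occlusion flags). This is an illustration of the general structure, not a claim about CoTracker's exact training format, and all names and values are made up:

```python
# Sketch of a TAP-Vid-style point-tracking annotation for one video.
# Shapes are illustrative: T frames, N annotated points.
T, N = 4, 2  # e.g. 4 frames, 2 points (nose and one ear)

sample = {
    "video_path": "mouse_clip_0001.mp4",  # hypothetical file name
    # points[n][t] = (x, y) pixel position of point n in frame t
    "points": [
        [(120.0, 80.0), (121.5, 82.0), (123.0, 84.5), (124.0, 86.0)],  # nose
        [(110.0, 70.0), (111.0, 71.0), (112.5, 72.0), (113.0, 73.5)],  # left ear
    ],
    # occluded[n][t] = True if point n is not visible in frame t
    "occluded": [
        [False, False, False, False],
        [False, False, True, False],  # ear briefly hidden in frame 2
    ],
}

# Basic consistency checks: every point has T positions and T flags.
assert len(sample["points"]) == len(sample["occluded"]) == N
assert all(len(traj) == T for traj in sample["points"])
assert all(len(flags) == T for flags in sample["occluded"])
```

Annotating a handful of frames per video in this shape (by hand or with a labeling tool) is usually the starting point for building a fine-tuning set.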

JackIRose avatar Oct 26 '24 06:10 JackIRose

Hi! I’m working on using CoTracker for 6D pose estimation of objects and am experimenting with ways to improve robustness. My approach involves adding an initialization video that shows the object from multiple angles, then querying points from each view to help the model better recognize the object.

However, I've encountered a significant issue: CoTracker's feature extraction doesn't seem to be invariant to in-plane (2D) rotation. When the object appears upright in the query frame but upside down in a later frame, the model struggles to recognize it as the same object. In the images below, the object is upright in the query frame (first image). As it rotates toward 180 degrees, performance degrades: at 90 degrees (second image) it drops noticeably, and upside down (third image) CoTracker struggles to recognize the object at all. As the object rotates back, performance recovers: at 270 degrees (fourth image) it improves noticeably, and near the original orientation (last image, ~315 degrees) CoTracker recovers almost fully, as if it realizes it is still looking at the same object from the query frame.

My question: Is there an easy way to use multiple query frames per point? Specifically, can I link different views of the same point across frames? This way, I could provide rotated views of the original query frame, giving the model multiple examples of the same point when the object undergoes significant rotation.

If there’s no built-in way to achieve this, do you have any suggestions for how I might integrate this feature into CoTracker’s architecture without needing to retrain the model? I have been trying and struggling.

Thanks in advance! I understand this isn’t the primary use case for the model, but I’m hopeful that adding rotation handling could enhance its performance.

[images, left to right: Query, 90 degrees, 180 degrees, 270 degrees, ~315 degrees]
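On the multiple-query-frames question: CoTracker's queries are (t, x, y) triplets, so each query point already carries its own start frame. Assuming that format, one workaround, sketched below in plain Python, is to duplicate the same physical point at several frames of an initialization video and keep your own bookkeeping so the resulting tracks can be merged afterwards. The point names and coordinates here are hypothetical:

```python
# Sketch: query the SAME physical point at several frames (e.g. the
# upright, 90-degree, and 180-degree views of the object). The grouping
# by physical point is our own bookkeeping, not part of the model API.

# Hypothetical observations: physical point id -> [(frame, x, y), ...]
observations = {
    "corner_A": [(0, 50.0, 40.0), (30, 42.0, 55.0), (60, 35.0, 48.0)],
    "corner_B": [(0, 80.0, 40.0), (30, 78.0, 62.0)],
}

queries = []  # flat (t, x, y) list, what the tracker would consume
groups = {}   # physical point id -> indices into `queries`
for name, views in observations.items():
    groups[name] = []
    for (t, x, y) in views:
        groups[name].append(len(queries))
        queries.append((t, x, y))

# After tracking, the tracks at groups[name] could be merged, e.g. by
# keeping, per frame, the track with the highest predicted visibility.
assert len(queries) == 5
assert groups["corner_A"] == [0, 1, 2]
```

To actually run this, `queries` would be converted to a tensor of shape (1, N, 3) and passed to the model; the merging heuristic (highest visibility per frame) is only one possible choice.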

Jafonso-sudo avatar Nov 11 '24 12:11 Jafonso-sudo

Use DIFT: https://arxiv.org/pdf/2306.03881. Thank me later.

jtattershall-ga avatar Dec 12 '24 15:12 jtattershall-ga