Nikita Karaev
Hi @mett96, @st5714, Indeed, it doesn't work. Thank you for pointing it out, I will fix the segmentation notebook so it works with the updated version of the code. In the...
Hi @newforrestgump001, sorry for the late response. Yes, you can check out the [online demo](https://github.com/facebookresearch/co-tracker/blob/0f9d32869ac51f3bd12c5ead9c206366cfb6caea/online_demo.py). Is this what you are looking for?
Hi @newforrestgump001, yes, that's how this demo works, except that the result is currently updated every four frames, not every frame.
Hi @lutianye, yes, that's exactly how the online demo works: it initialises the model, waits until it has access to the first 8 frames (the sliding window size is 8...
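The sliding-window behaviour described in the two replies above (window of 8 frames, results refreshed every four frames) can be sketched in plain Python. This is only an illustration of the loop structure, not the actual CoTracker code; `run_tracker` is a hypothetical stand-in for the model's forward pass, and the constants mirror the numbers mentioned in the comments.

```python
# Hedged sketch of the online sliding-window loop; `run_tracker`
# is a hypothetical placeholder for the real model call.
from collections import deque

WINDOW = 8   # sliding window size, as stated in the reply above
STRIDE = 4   # the result is updated every four frames

def run_tracker(window_frames):
    # Placeholder for the actual model forward pass.
    return f"tracks for frames {window_frames[0]}..{window_frames[-1]}"

def online_loop(frame_stream):
    buf = deque(maxlen=WINDOW)   # keeps only the last WINDOW frames
    results = []
    for i, frame in enumerate(frame_stream):
        buf.append(frame)
        # Wait until the first full window is available, then
        # re-run the tracker every STRIDE frames after that.
        if len(buf) == WINDOW and (i - (WINDOW - 1)) % STRIDE == 0:
            results.append(run_tracker(list(buf)))
    return results
```

On a 16-frame stream this fires at frames 7, 11, and 15, i.e. once the first 8 frames are available and then every four frames, matching the behaviour described above.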
Hi @Anderstask1, I think excluding completely invisible points is a good idea, because they don't contribute to training anyway (unless you have at least one frame where the point is...
Hi @Anderstask1, yes, you either need to remove these points or modify the logic of queried points' sampling to completely ignore the invisible tracks.
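Dropping tracks that are invisible in every frame, as suggested in the two replies above, amounts to a boolean reduction over the time axis. A minimal NumPy sketch, with illustrative array shapes (the actual tensor layout in the codebase may differ):

```python
# Sketch: remove tracks that are never visible in any frame.
# Shapes are illustrative assumptions, not the repo's actual layout.
import numpy as np

def drop_invisible_tracks(tracks, visibility):
    """tracks: (T, N, 2) point coordinates; visibility: (T, N) bool."""
    keep = visibility.any(axis=0)  # visible in at least one frame
    return tracks[:, keep], visibility[:, keep]
```

The alternative mentioned above, changing the query-sampling logic, would apply the same `keep` mask before queried points are drawn instead of filtering afterwards.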
Hi @antithing, yes! You might want to check out our paper on structure from motion with point tracking: https://vggsfm.github.io/
Hi @feiwu77777, I think there's now a better solution to your problem. You can run CoTracker in online mode. This way, you don't have to keep the whole video in...
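The memory benefit of online mode comes from streaming the video in fixed-size chunks instead of loading every frame at once. A hedged sketch of that pattern, where `read_frame` is a hypothetical per-frame loader (not a function from this repository):

```python
# Illustrative sketch: yield a video in fixed-size chunks so only
# one chunk lives in memory at a time; `read_frame` is hypothetical.
def iter_chunks(read_frame, num_frames, chunk_size=8):
    chunk = []
    for i in range(num_frames):
        chunk.append(read_frame(i))
        if len(chunk) == chunk_size:
            yield chunk
            chunk = []
    if chunk:          # flush the final, possibly shorter chunk
        yield chunk
```

Each yielded chunk can then be fed to the model in online mode, so peak memory scales with the chunk size rather than the video length.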
Hi @pvtoan, this variable exists for one reason - to make it possible to visualize the output using the existing tools in this repository. You can safely discard the previous...
Hi @pvtoan, yes, this is done for the same reason, and we should probably fix it. 1. Here we continuously update tracks and visibilities inside the model in the online...