Jianyuan Wang

238 comments by Jianyuan Wang

Hi, honestly I haven’t touched the 4D-GS code, so I’m probably not the best person to comment. My guess is that when they ran COLMAP on dynamic videos, they likely...

Hi, thanks for your interest and kind words! The issue you’re seeing arises because the tracking head of VGGT was designed and trained specifically for rigid scenes, without dynamic or...

Hi, we do not have a clear timeline for the release yet. For the fine-tuning, yes, we used the released Kubric dataset that was used for CoTracker3. The details I need to double...

Hi @xhchen10 @LeoPerelli, the role of the "GT scale \alpha_gt (see Line 536-541 in main.py)" is to ensure that the depth values are properly normalized during the training phase, which...
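The normalization idea described above can be sketched in a few lines. This is only an illustration of rescaling predictions by a ground-truth scale factor; the function name and the choice of matching mean depths are assumptions, not the actual code at Line 536-541 of main.py:

```python
import numpy as np

def normalize_depth_by_gt_scale(pred_depth, gt_depth, eps=1e-8):
    """Hypothetical sketch: rescale predicted depth to the ground-truth scale.

    alpha_gt is a per-sample scalar chosen so that the mean of the scaled
    predictions matches the mean ground-truth depth, keeping depth values
    in a consistent range during training.
    """
    alpha_gt = gt_depth.mean() / (pred_depth.mean() + eps)
    return alpha_gt * pred_depth, alpha_gt

# Toy example: predictions are uniformly half the ground-truth scale.
gt = np.array([2.0, 4.0, 6.0])
pred = np.array([1.0, 2.0, 3.0])
scaled, alpha = normalize_depth_by_gt_scale(pred, gt)
```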

Hi @JackIRose @Air1000thsummer , in case you still need this, we have shared our training set at https://huggingface.co/datasets/facebook/CoTracker3_Kubric

Hi, you can simply use the pretrained encoder following the steps in the readme, e.g.:
```
from vggt.utils.pose_enc import pose_encoding_to_extri_intri
from vggt.utils.geometry import unproject_depth_map_to_point_map

with torch.no_grad():
    with torch.cuda.amp.autocast(dtype=dtype):
        images =...
```
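For context, the `unproject_depth_map_to_point_map` step mentioned above lifts each pixel's depth into a 3D point using the camera intrinsics. Below is a minimal NumPy sketch of the standard pinhole-camera math behind that idea; it is an illustration, not VGGT's actual implementation:

```python
import numpy as np

def unproject_depth(depth, K):
    """Lift a depth map (H, W) to a point map (H, W, 3) in camera coordinates.

    Standard pinhole model:
        X = (u - cx) * z / fx,  Y = (v - cy) * z / fy,  Z = z
    where (u, v) are pixel coordinates and K is the 3x3 intrinsics matrix.
    """
    H, W = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel grids, shape (H, W)
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.stack([X, Y, depth], axis=-1)

# A pixel at the principal point unprojects to (0, 0, z).
K = np.array([[500.0, 0.0, 2.0],
              [0.0, 500.0, 1.0],
              [0.0, 0.0, 1.0]])
depth = np.full((3, 5), 4.0)
points = unproject_depth(depth, K)
```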

Hi @Chinafsh, yes, `repetitive textures` are kind of expected, especially cases like "doppelganger" scenes. For weak textures, would you mind sharing some examples? I previously did not notice it will...

Hi,
1. Can you check the max and median distribution of the depth conf map?
2. If the max is still around 1.0, it is quite possible that (a) a...
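The check in step 1 takes only a couple of lines (illustrative; assumes the confidence map is already available as an array):

```python
import numpy as np

def conf_stats(conf_map):
    """Report the statistics used to sanity-check a depth confidence map."""
    conf = np.asarray(conf_map).ravel()
    return {"max": float(conf.max()), "median": float(np.median(conf))}

# Example: a map whose max sits near 1.0, the suspicious case mentioned above.
stats = conf_stats(np.array([[1.0, 1.0, 1.02],
                             [1.0, 1.01, 1.0]]))
```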

Hey! The filtering bar in the HF demo represents a percentage value. We scale the confidence map to [0, 100] for visualization.

Hi @dreamer1031, Thanks for your interest! Could you share where the recommendation to use square-sized images appears? I think I did not do that but might have unintentionally written this...