Jianyuan Wang
Is this what you are looking for? https://github.com/facebookresearch/vggt/issues/47
Hey, I cannot see what is happening there without access to the original images.
Hey, thanks for sharing. It looks like the images were uploaded as stitched low-resolution frames, so I can't run them directly. Here are the most plausible issues I can think...
Hi, three options may matter here:
1. Whether bundle adjustment is used
2. The confidence threshold you use
3. How the images are loaded
https://github.com/facebookresearch/vggt/blob/8492456ce358ee9a4fe3274e36d73106b640fb5c/demo_colmap.py#L46
Hi, could you provide a full code snippet to reproduce this behavior? I suspect this results from a dimension mismatch; the expected shape of the inputs is detailed here: https://github.com/facebookresearch/vggt/blob/6d361a374ea50b040e93fa68fca0ab2cbee0e7a8/vggt/models/vggt.py#L27-L55
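For reference, a common cause of this kind of mismatch is feeding HWC-ordered frames instead of the CHW layout the model expects. A minimal sketch of normalizing the layout before inference (the `[B, S, 3, H, W]` shape and the 518-pixel resolution are assumptions based on the docstring linked above; NumPy stands in for the actual tensor library):

```python
import numpy as np

# Hedged sketch: assume the model expects images shaped
# [S, 3, H, W] or [B, S, 3, H, W], with values in [0, 1].
# Frames decoded from video are often [S, H, W, 3] (HWC) instead.
frames = np.random.rand(4, 518, 518, 3).astype(np.float32)  # [S, H, W, 3]

images = frames.transpose(0, 3, 1, 2)  # reorder to [S, 3, H, W]
batched = images[None]                 # add batch dim -> [1, S, 3, H, W]

print(batched.shape)  # (1, 4, 3, 518, 518)
```

Printing the shape right before the forward call is usually the quickest way to confirm whether the inputs match what the docstring describes.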
Hi, thanks for your interest! Ideally I should be able to release it in 2-3 weeks.
I am gradually cleaning and uploading files to the training branch (https://github.com/facebookresearch/vggt/tree/training) of this repo. Once everything is finished, they will be merged into the main branch.
Closing this issue since the complete training code is now available.
Hey, would this answer your question? https://github.com/facebookresearch/vggt/issues/140