Jianyuan Wang
Hi @insomniaaac, you can use the omnidata repo, https://github.com/EPFL-VILAB/omnidata/tree/f69ff3aedf983cb34c490a0afd0d29fdd83f6a1c, where Habitat has already been rendered.
Hey, it will be released together with our next version.
Hi, we do not assume the input images share the same intrinsics. You can see that the focal length differs for each image. However, the principal point is always...
Yeah, for the ant example, it is a known issue. Since we included some dynamic datasets for training (e.g., TartanAir, pointo), the model tends to "imagine" that some dynamic...
Hi, can you share your image files? If the files cannot be shared, my guesses are: 1. 3D points are filtered out by a hard-coded confidence threshold, e.g., https://github.com/facebookresearch/vggt/blob/22d5c18fe6a99aef16b37a863f458935cf7b3120/demo_colmap.py#L192C9-L192C25 2....
I see. It seems that just tuning conf_thres_value to a smaller value gives a much better result. I am also going to export conf_thres_value as an arg. Please let...
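A minimal sketch of what the confidence filtering above does, assuming per-point confidence scores and a tunable threshold; the function name `filter_points_by_conf` and the toy data are hypothetical, only the `conf_thres_value` parameter comes from the repo's `demo_colmap.py`:

```python
import numpy as np

def filter_points_by_conf(points3d, conf, conf_thres_value=1.5):
    """Keep only 3D points whose predicted confidence meets the threshold.

    points3d: (N, 3) array of reconstructed points
    conf: (N,) per-point confidence predicted by the model
    conf_thres_value: lowering this keeps more (possibly noisier) points
    """
    mask = conf >= conf_thres_value
    return points3d[mask], mask

# Toy data: three points, one with confidence below the threshold.
pts = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 2.0], [0.5, 0.5, 3.0]])
conf = np.array([2.0, 1.2, 3.1])
kept, mask = filter_points_by_conf(pts, conf, conf_thres_value=1.5)
print(kept.shape[0])  # 2 points survive
```

Lowering `conf_thres_value` trades precision for coverage: more points are exported, including some with unreliable depth.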
If for some scenes the depth confidence is almost all 1.0, it means either the scene contains many dynamic pixels, or there is very little overlap between views.
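A quick diagnostic for this degenerate case can be sketched as follows; the helper name, tolerance, and fraction cutoff are assumptions for illustration, not part of the repo:

```python
import numpy as np

def depth_conf_looks_degenerate(depth_conf, tol=0.05, frac=0.95):
    """Heuristic: flag a scene whose depth confidence map is almost all ~1.0,
    which (per the comment above) suggests many dynamic pixels or very
    little overlap between views."""
    near_one = np.abs(depth_conf - 1.0) < tol
    return bool(near_one.mean() >= frac)

# A healthy confidence map spreads well above 1.0; a degenerate one is flat.
good = np.random.default_rng(0).uniform(1.0, 10.0, size=(64, 64))
bad = np.full((64, 64), 1.0)
print(depth_conf_looks_degenerate(good), depth_conf_looks_degenerate(bad))
```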
For me it works well if you count image and camera from 0, e.g.:

```python
fidx = 0
camera = pycolmap.Camera(
    model="SIMPLE_PINHOLE",
    width=image_size[0],
    height=image_size[1],
    params=pycolmap_intri_pinhole,
    camera_id=fidx,
)
image = pycolmap.Image(
    id=fidx,
    name=f"image_{fidx}",
    camera_id=camera.camera_id,
    cam_from_world=cam_from_world,
)
```
...
Yeah, I am working on cleaning the code and will share something within this week.
I am gradually cleaning and uploading files to the training branch (https://github.com/facebookresearch/vggt/tree/training) of this git repo. After everything is finished, they will be merged into the main branch.