
If I have known camera poses and intrinsics, will the reconstruction be scaled properly?

Open Muhammad0312 opened this issue 1 year ago • 9 comments

Congrats on the awesome work! If I know the camera poses and intrinsics and I use preset_pose, preset_focal and preset_principal_point, will the point cloud be accurately scaled? Thank you.

Muhammad0312 avatar May 08 '24 09:05 Muhammad0312

From my data, I find that the scale is smaller than the LiDAR ground truth. Has anyone else gotten a similar result?

CuriousCat-7 avatar May 27 '24 08:05 CuriousCat-7

It should be scaled proportionally to your poses.

simonpokorny avatar May 30 '24 11:05 simonpokorny

I tried injecting the poses, and the depth scale is now larger than without the pose injection. Injecting the intrinsics as-is is not a good idea: the code resizes the image to 512, but we have the intrinsics of the original image, and once the image dimensions are changed, the intrinsics need to be adjusted accordingly. When I injected the original focals and principal points, my reconstructions were mismatched and weirdly elongated. I know how to adjust the intrinsics when the image is scaled down uniformly in height and width, but here the two scales differ, while the model expects a single focal parameter. I am still looking into how to handle this.
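For what it's worth, the intrinsic adjustment for a plain resize-then-crop is mechanical; the harder issue this comment raises is that DUSt3R keeps only one focal per camera. Below is a minimal sketch (the function name and all numbers are made up for illustration):

```python
import numpy as np

def rescale_intrinsics(K, sx, sy, crop_left=0, crop_top=0):
    """Adjust a 3x3 intrinsic matrix after resizing by (sx, sy)
    and then cropping crop_left/crop_top pixels off the image."""
    K2 = K.astype(float).copy()
    K2[0, 0] *= sx                        # fx scales with image width
    K2[1, 1] *= sy                        # fy scales with image height
    K2[0, 2] = K2[0, 2] * sx - crop_left  # principal point scales and shifts
    K2[1, 2] = K2[1, 2] * sy - crop_top
    return K2

# Example: 1600x1200 image, long side resized to 512 (uniform scale, no crop)
K = np.array([[1200.0, 0.0, 800.0],
              [0.0, 1200.0, 600.0],
              [0.0, 0.0, 1.0]])
s = 512 / 1600
K_512 = rescale_intrinsics(K, s, s)
# If sx != sy, then fx' != fy'; since DUSt3R optimizes a single focal per
# camera, one pragmatic (lossy) choice is to preset the mean of the two.
```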

Munna-Manoj avatar Jun 04 '24 06:06 Munna-Manoj

> From my data, I find that the scale is smaller than the LiDAR ground truth. Has anyone else gotten a similar result?

I tried sampling my data differently, and now the scale is accurate.

CuriousCat-7 avatar Jul 16 '24 08:07 CuriousCat-7

Hello, could I ask if the code for presetting the camera pose should look like this? I'm asking because when I preset the pose, the only change in the output is the pose itself; the scale of the mesh and the visualized camera poses look the same as without the preset. This is a concern because the predicted pose is the opposite of the camera pose.

```python
import numpy as np
import torch
from dust3r.model import AsymmetricCroCo3DStereo
from dust3r.inference import inference
from dust3r.utils.image import load_images
from dust3r.image_pairs import make_pairs
from dust3r.cloud_opt import global_aligner, GlobalAlignerMode

# Load model
model_name = "naver/DUSt3R_ViTLarge_BaseDecoder_512_dpt"
model = AsymmetricCroCo3DStereo.from_pretrained(model_name).to(device)
images = load_images(['ego/396.png', 'ego/397.png'], size=512, square_ok=True)
pairs = make_pairs(images, scene_graph='complete', prefilter=None, symmetrize=True)
output = inference(pairs, model, device, batch_size=batch_size)

# Preset the known camera poses (4x4)
transformation_matrix = np.array([
    [0.39639117784217504, 0.3700242661774752, -0.8402119234307897, 0.5456278480005436],
    [0.7599840600945141, -0.6456944320190007, 0.07418172306960222, 0.5909837168630092],
    [-0.5150711233858591, -0.6679526497615859, -0.5371601204029981, -0.699915119593638],
    [0, 0, 0, 1]
])
transformation_matrix_2 = np.array([
    [0.3899919307857455, 0.3691926781352147, -0.8435656823096248, 0.541349340673925],
    [0.7659154177137775, -0.6385980830100308, 0.07460604122323877, 0.6028915265397452],
    [-0.511155423407559, -0.6751957159658493, -0.5318184637018885, -0.6991336847874309],
    [0, 0, 0, 1]
])
known_pose = torch.tensor(transformation_matrix).to(device)
known_pose_2 = torch.tensor(transformation_matrix_2).to(device)

scene = global_aligner(output, device=device, mode=GlobalAlignerMode.PointCloudOptimizer)
scene.preset_pose([known_pose, known_pose_2])
loss = scene.compute_global_alignment(init='known_poses', niter=niter, schedule=schedule, lr=lr)
outfile = get_3D_model_from_scene(outdir="output", silent=False, scene=scene, as_pointcloud=False)
```

Thanks.

egil158 avatar Jul 17 '24 06:07 egil158

> Hello, could I ask if the code for the preset camera pose should look like this? […]

Bro, I guess you forgot to preset the focal.
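For reference, presetting the focal alongside the pose might look like the sketch below. The `preset_focal` call follows the dust3r PointCloudOptimizer API as I understand it; the original focal and resize factor are made-up example values. The key point is that the focal must be expressed in pixels of the resized (512-long-side) image, not of the original one.

```python
# Focal length must be rescaled to the 512-long-side image that
# load_images() produces; feeding the original-resolution focal is a
# common source of distorted reconstructions.
orig_focal = 1000.0        # example: focal in pixels at 1920px width
scale = 512 / 1920         # long side resized from 1920 to 512
focal_512 = orig_focal * scale

# With a dust3r PointCloudOptimizer `scene` (API names assumed):
# scene.preset_pose([known_pose, known_pose_2])
# scene.preset_focal([focal_512, focal_512])   # one focal per image
# loss = scene.compute_global_alignment(init='known_poses',
#                                       niter=niter, schedule=schedule, lr=lr)
print(focal_512)
```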

Vilour avatar Aug 31 '24 06:08 Vilour

Is it possible to preset camera poses in MASt3R as well?

pedrozamboni avatar Feb 17 '25 14:02 pedrozamboni

> Hello, could I ask if the code for the preset camera pose should look like this? […]

> Bro I guess you forget to preset focal

May I ask how to preset the focal? Is it necessary to preset all the intrinsics, or only the focals?

Michael-Evans-Savitar avatar Mar 01 '25 11:03 Michael-Evans-Savitar

> It should be scaled proportionally to your poses.

What should be done specifically? Does it mean that the GT poses and the predicted poses should be aligned?

booker-max avatar May 13 '25 04:05 booker-max
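One concrete way to answer the alignment question above: estimate the least-squares similarity transform between predicted and ground-truth camera centers (Umeyama's method); the recovered scale factor tells you how far off the reconstruction's scale is. This is a self-contained numpy sketch with synthetic data, not dust3r code:

```python
import numpy as np

def umeyama_scale(src, dst):
    """Least-squares similarity scale aligning src points (Nx3, e.g.
    predicted camera centers) to dst (Nx3 ground-truth centers)."""
    src_c = src - src.mean(axis=0)          # center both point sets
    dst_c = dst - dst.mean(axis=0)
    cov = dst_c.T @ src_c / len(src)        # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))      # reflection correction
    var_src = (src_c ** 2).sum() / len(src)
    return (S[0] + S[1] + d * S[2]) / var_src

# Synthetic check: dst is src scaled by 2, rotated, and translated
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
dst = 2.0 * src @ R.T + np.array([1.0, 2.0, 3.0])
print(umeyama_scale(src, dst))  # ≈ 2.0
```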