Re-using Dust3r output with new images
Hi all!
I have a setup where I run dust3r on a few images, then I want to add images to this and run it again.
I've been following issues #54, #30, and #17, and using the ModularPointCloudOptimizer from this commit: https://github.com/naver/dust3r/commit/4a414b6406e5b3da3278a97f8cef5acfa2959d0b
I'm wondering if there's anything else that can be preset in a scenario like this. Basically, I'm re-using existing depth maps with `_set_depthmap`, which doesn't seem to save time (maybe I need to disable grad on the preset depth maps?).
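In case it helps to illustrate what I mean by "disable grad": here is a minimal torch sketch (the parameter list is a toy stand-in; the real attribute names inside the optimizer may differ) of presetting one depth map and freezing it so the optimizer only updates the others:

```python
import torch

# Toy stand-in for the optimizer's per-image depth-map parameters
# (the real attribute name inside dust3r may differ).
depthmaps = [torch.nn.Parameter(torch.rand(4, 4)) for _ in range(3)]

# Suppose image 0 has a known depth map we want to preset and freeze.
known_depth = torch.full((4, 4), 2.0)
with torch.no_grad():
    depthmaps[0].copy_(known_depth)
depthmaps[0].requires_grad_(False)

# Only the remaining depth maps should be handed to the optimizer.
trainable = [p for p in depthmaps if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=0.01)

print(len(trainable))  # 2 depth maps left to optimize
```

Without the `requires_grad_(False)` step the preset map still gets gradients computed and updated every iteration, which would explain why presetting alone doesn't save time.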
I haven't been able to get the normal PointCloudOptimizer with `compute_global_alignment(init='known_poses')` to work either. Each time I try, I get an error about `requires_grad` when running `scene.preset_principal_point`.
I'm mostly opening this issue in case you can think of a way to incorporate new images into an existing Dust3r scene efficiently, as this is my use case.
Thanks so much for making such an incredible project! Looking forward to Mast3r too :).
Also, it's interesting that going from 2 images with the PairViewer to 3 images with the ModularPointCloudOptimizer, the scale of the reconstruction often changes quite dramatically! Easy to fix if we track scale and initial pose, but just something I noticed~
Re: "Each time I try I get an error about the requires_grad when running scene.preset_principal_point"
Unintuitively, I think that you need to initialize global_aligner with the "optimize_pp=True" option beforehand
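For reference, the initialization I mean looks roughly like this (a sketch against the dust3r API; `output`, the known poses, and the masks come from your earlier steps, and the exact signatures may differ in your checkout):

```python
from dust3r.cloud_opt import global_aligner, GlobalAlignerMode

# optimize_pp=True makes the principal points learnable parameters up front,
# so preset_principal_point can overwrite them without the requires_grad error.
scene = global_aligner(output, device='cuda',
                       mode=GlobalAlignerMode.PointCloudOptimizer,
                       optimize_pp=True)
scene.preset_pose(known_poses, pose_msk)
scene.preset_principal_point(known_pp, pp_msk)
loss = scene.compute_global_alignment(init='known_poses')
```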
This is really helpful! I've gotten further than before.
```
> /home/relh/Code/???????????/dust3r/dust3r/cloud_opt/init_im_poses.py(61)init_from_known_poses()
     60     assert known_poses_msk[n]
---> 61     _, i_j, scale = best_depthmaps[n]
     62     depth = self.pred_i[i_j][:, :, 2]

ipdb> print(n)
0
ipdb> best_depthmaps
{1: (4.287663459777832, '1_0', tensor(0.7287, device='cuda:0')), 2: (2.592036485671997, '2_1', tensor(0., device='cuda:0'))}
```
I've got here and if I get further will update!
When registering the new image, I ran into the problem that the estimated scale is sensitive to noise. I guess it is because the Procrustes problem is not robust. Do you have any ideas about it?
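To illustrate the sensitivity (a standalone numpy sketch, not dust3r code): the least-squares scale that a Procrustes-style fit produces is pulled around badly by a few outlier correspondences, while the median of per-point ratios is far more robust and could be a drop-in replacement when estimating scale between two depth maps:

```python
import numpy as np

def ls_scale(src, ref):
    # Least-squares scale estimate: minimizes sum (ref - s*src)^2.
    # A handful of gross outliers biases it heavily.
    return float(np.sum(src * ref) / np.sum(src * src))

def robust_scale(src, ref):
    # Median of per-point ratios: tolerates a large outlier fraction.
    return float(np.median(ref / src))

rng = np.random.default_rng(0)
src = rng.uniform(1.0, 5.0, 1000)
ref_noisy = 2.0 * src          # true scale is 2.0
ref_noisy[:50] *= 20.0         # 5% gross outliers

print(ls_scale(src, ref_noisy))      # badly biased away from 2.0
print(robust_scale(src, ref_noisy))  # very close to 2.0
```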
hi @relh
I also encountered the problem above and solved it with the method hturki mentioned. But the point cloud I get after preset_poses and preset_intrinsics seems to have problems (my photos were taken by rotating 360 degrees around the center, and I computed and set the poses myself). Have you encountered this problem? If so, how can I solve it?
The depth estimation part of the pipeline looks fine, so I am very confused about why the final result has problems.
> Also it's interesting that going from 2 images with the PairViewer and 3 images with the ModularPointCloudOptimizer the scale of the reconstruction often changes quite dramatically! Easy to fix if we track scale and initial pose but just something I noticed~
I have observed this as well. May I ask what should be done to fix this scale issue if I keep using PairViewer for incremental reconstruction and pose estimation? Would you mind giving an example? Thank you~
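In case it helps, the kind of bookkeeping "track scale and initial pose" implies can be sketched like this (numpy only, names are hypothetical): pick one camera that appears in both the old and the new reconstruction, use the ratio of its translation norms as the scale factor, and rescale the new poses and depth maps back into the old frame:

```python
import numpy as np

def rescale_to_reference(poses_new, depths_new, anchor_idx, t_ref):
    """Rescale a fresh reconstruction so that camera `anchor_idx`
    has the same translation magnitude as in the reference run."""
    t_new = poses_new[anchor_idx][:3, 3]
    s = np.linalg.norm(t_ref) / np.linalg.norm(t_new)
    poses_scaled = [p.copy() for p in poses_new]
    for p in poses_scaled:
        p[:3, 3] *= s                            # translations scale with the scene
    depths_scaled = [d * s for d in depths_new]  # so do the depth maps
    return poses_scaled, depths_scaled, s

# Toy example: the new run came out at half the old scale.
pose = np.eye(4)
pose[:3, 3] = [0.5, 0.0, 0.0]
poses, depths, s = rescale_to_reference([np.eye(4), pose],
                                        [np.ones((2, 2)) * 1.5] * 2,
                                        anchor_idx=1,
                                        t_ref=np.array([1.0, 0.0, 0.0]))
print(s)  # 2.0
```

The anchor camera just needs a non-zero translation in both runs; with only a relative-pose anchor you can only recover scale, not the global rigid alignment, so keeping the first camera fixed as the origin across runs helps too.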
I'm experiencing quite a sizable increase in optimisation time when reconstructing with new images. For example, I first get an output from dust3r for 6 images, storing poses etc. When I add a new image to the set and run through dust3r again with the modular optimiser, it takes about 50% longer than with no presets. Is this expected?
I switched to mast3r which uses a cache and it seems to have sped up things when using more than 2 images
I've tried mast3r too when adding new images to a scene with presets but it's still slower than just running the whole scene without any presets. Are you adding new images to a scene with known poses?
Edit: if I increase the LR, I can reduce the number of reconstruction iterations, which ends up taking about the same time as uninitialised optimisation but with better accuracy in the end.
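Concretely, the knobs I mean are the ones `compute_global_alignment` already exposes (a sketch; the exact values that work will depend on your scenes, and I haven't tuned them carefully):

```python
# Baseline: default settings, e.g.
# scene.compute_global_alignment(init='known_poses', niter=300,
#                                schedule='cosine', lr=0.01)

# With good presets, a higher lr converges in far fewer iterations:
loss = scene.compute_global_alignment(init='known_poses', niter=100,
                                      schedule='cosine', lr=0.07)
```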