Incorporating Known Camera Poses in Global Alignment Optimization
I'm seeking a way to run global alignment optimization with known camera poses. After examining the code, I've identified some challenges:
- `cam2w` matrices are constructed in `make_K_cam_depth`, but they don't directly correspond to standard extrinsic matrices.
- The code handles scale ambiguity by adjusting the translation vector based on the focal length and scale, rather than multiplying the scales into the pointmaps.
- This approach leverages the fact that scaling up the pointmap is equivalent to (a) increasing the focal length and (b) shifting the camera location along the z-axis.
- As a result, when mapping 2D points into 3D space, per-frame scales don't need to be multiplied into the depthmaps: `pts3d = proj3d(invK[img], pixels, depthmaps[img][idxs] * offsets)`
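To make the scale-ambiguity argument above concrete, here is a minimal NumPy sketch. Note that `proj3d_sketch` and the intrinsics below are hypothetical stand-ins for illustration, not the repository's actual functions: it shows that scaling the depthmap scales the whole pointmap uniformly, and that the scaled pointmap still reprojects to the same pixels.

```python
import numpy as np

def proj3d_sketch(invK, pixels, depth):
    """Back-project pixel coordinates to camera-frame 3D points.
    `pixels` is (N, 2), `depth` is (N,); returns (N, 3)."""
    homog = np.hstack([pixels, np.ones((len(pixels), 1))])  # (N, 3)
    rays = homog @ invK.T          # rays with z = 1
    return rays * depth[:, None]   # scale each ray by its depth

# Hypothetical pinhole intrinsics, for illustration only.
K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])
invK = np.linalg.inv(K)

pixels = np.array([[100., 50.], [320., 240.]])
depth = np.array([2.0, 5.0])
scale = 3.0  # stand-in for a per-frame scale ("offsets" in the snippet above)

# Scaling the depthmap scales the whole pointmap uniformly...
pts = proj3d_sketch(invK, pixels, depth)
pts_scaled = proj3d_sketch(invK, pixels, depth * scale)
assert np.allclose(pts_scaled, pts * scale)

# ...and the uniformly scaled pointmap reprojects to the SAME pixels,
# which is exactly the scale ambiguity the optimization exploits.
reproj_h = pts_scaled @ K.T
reproj = reproj_h[:, :2] / reproj_h[:, 2:3]
assert np.allclose(reproj, pixels)
```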
While this implementation is convenient, it makes incorporating known camera poses challenging: `cam2w` is not a pure extrinsic matrix, since it is entangled with the focal length (an intrinsic parameter).
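For what it's worth, if `cam2w` were a pure extrinsic, fixing known poses would reduce to freezing those pose parameters during optimization, e.g. by masking their gradients. The sketch below illustrates that generic technique with hypothetical pose parameters (the names, shapes, and indices are my assumptions, not MASt3R's actual parameterization); the remaining difficulty would be converting a known extrinsic into the focal-entangled translation the code actually optimizes.

```python
import torch

n_imgs = 4
known = [0, 2]  # hypothetical: indices of images whose poses are given

# Hypothetical pose parameterization: per-image quaternion + translation.
quats = torch.nn.Parameter(torch.randn(n_imgs, 4))
trans = torch.nn.Parameter(torch.randn(n_imgs, 3))

# 0/1 mask that zeroes the gradient rows of images with known poses.
mask = torch.ones(n_imgs, 1)
mask[known] = 0.0

quats_before = quats.detach().clone()
trans_before = trans.detach().clone()

optim = torch.optim.SGD([quats, trans], lr=0.1)
loss = quats.sum() + trans.sum()  # stand-in for the alignment loss
loss.backward()
with torch.no_grad():  # freeze the known poses before the step
    quats.grad *= mask
    trans.grad *= mask
optim.step()
# Rows listed in `known` are untouched; the others moved by -lr * grad.
```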
Questions
- Are there plans to enable the incorporation of known camera poses?
- Is there any consideration of changing the code design to make integrating known camera poses easier?
I also welcome opinions or insights on this topic from others.
Thank you!