Infer depth with given camera intrinsics
Hi, great work! I'd like to know whether there is any way to infer depth maps and point maps with fixed, user-given camera intrinsics. Is there any plan to support this?
Hi, thanks for your interest! Inference with user-provided intrinsics is a good idea and can be quite useful. It was discussed in this issue https://github.com/microsoft/MoGe/issues/24#issuecomment-2493184207.
Note that the model itself is trained to estimate point maps directly, without intrinsics as input, but we can adjust the output to align with user-provided intrinsics. I am going to support this in the inference code after a few tests.
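For reference, here is a minimal sketch of what such an alignment step could look like. It assumes the predicted point map is expressed in camera coordinates up to an unknown scale and an unknown shift along the z-axis (scale does not affect reprojection, so only the z-shift needs to be recovered). The function name, the pixel-center convention, and the z-shift-only assumption are mine, not the repository's actual implementation, which may handle this differently or more robustly.

```python
import numpy as np

def align_points_to_intrinsics(points, mask, K):
    """
    Shift an affine-invariant point map along the z-axis so it reprojects
    consistently with a user-provided pinhole intrinsics matrix.

    points: (H, W, 3) predicted point map in camera coordinates
            (x right, y down, z forward), valid up to scale and z-shift.
    mask:   (H, W) boolean validity mask.
    K:      (3, 3) pinhole intrinsics in pixels.
    Returns the shifted point map and the corresponding depth map.
    """
    H, W, _ = points.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]

    # Pixel-center coordinates relative to the principal point.
    u, v = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)
    a, b = (u - cx)[mask], (v - cy)[mask]
    x, y, z = points[mask, 0], points[mask, 1], points[mask, 2]

    # The projection constraints fx*x = a*(z + t) and fy*y = b*(z + t)
    # are linear in the unknown z-shift t, so a closed-form
    # least-squares solution exists.
    num = np.sum(a * (fx * x - a * z) + b * (fy * y - b * z))
    den = np.sum(a ** 2 + b ** 2) + 1e-12
    t = num / den

    shifted = points.copy()
    shifted[..., 2] += t
    return shifted, shifted[..., 2]
```

This is essentially the same recovery the model already performs to estimate its own focal length and shift, except that the focal is fixed to the user-provided value and only the shift is solved for.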
Hi, excellent work, congratulations!
Although we can align the output with user-provided intrinsics in a post-processing step, I wonder whether it is possible to let the network take the intrinsics as input and output more accurate point maps. Any ideas?