gaussian-splatting
Question on the use of GT poses directly in 3DGS
Hi GS-ers,
I'm trying to get a Gaussian splat of the ICL-NUIM dataset.
I want to use the ground truth trajectory (so without COLMAP) and the existing .ply point cloud that comes with the dataset scenes.
From what I understood, there is an extra transformation to apply to my ground truth trajectory for it to comply with the COLMAP format.
I found one in scene/dataset_readers.py, in readCamerasFromTransforms():
# NeRF 'transform_matrix' is a camera-to-world transform
c2w = np.array(frame["transform_matrix"])
# change from OpenGL/Blender camera axes (Y up, Z back) to COLMAP (Y down, Z forward)
c2w[:3, 1:3] *= -1
# get the world-to-camera transform and set R, T
w2c = np.linalg.inv(c2w)
R = np.transpose(w2c[:3,:3]) # R is stored transposed due to 'glm' in CUDA code
T = w2c[:3, 3]
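For reference, here is a minimal sketch of how I adapt this to one of my ground-truth poses (assuming each GT pose is already a 4x4 camera-to-world matrix in the same OpenGL/Blender camera convention that this snippet expects; if ICL-NUIM uses a different axis convention, the flip would need to be adapted, and the helper name is my own):

import numpy as np

def gt_c2w_to_colmap(c2w_gt):
    # assumption: c2w_gt is a 4x4 camera-to-world matrix with OpenGL/Blender
    # camera axes (Y up, Z back), like the NeRF 'transform_matrix'
    c2w = np.array(c2w_gt, dtype=np.float64)
    c2w[:3, 1:3] *= -1                 # flip to COLMAP axes (Y down, Z forward)
    w2c = np.linalg.inv(c2w)           # world-to-camera
    R = np.transpose(w2c[:3, :3])      # stored transposed, as in the loader
    T = w2c[:3, 3]
    return R, T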
However, the trajectory still doesn't seem right after the transformation... Is there something I am missing?
Thanks for your help, Best regards :)
Update: The world coordinate origins are not the same, which makes the transformation "wrong". How can I determine precisely where the COLMAP world coordinate origin is?
Hey, I'm working on a similar issue. If I understand your problem correctly, you can look at these:
- CloudCompare, to obtain the transformation matrix between two point clouds.
- This code (from the Tanks and Temples tutorial) aligns two .ply files, but you have to update the Open3D class usages from o3d.registration to o3d.pipelines.registration (a sketch with the updated API follows below): https://www.tanksandtemples.org/tutorial/
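For completeness, a minimal sketch of that alignment step with the current Open3D API (the two .ply paths are placeholders, and the correspondence threshold is dataset dependent):

import numpy as np
import open3d as o3d

# placeholder file names: the cloud built from your GT poses and the COLMAP one
source = o3d.io.read_point_cloud("gt_cloud.ply")
target = o3d.io.read_point_cloud("points3D.ply")

threshold = 0.05      # max correspondence distance, tune per dataset
init = np.eye(4)      # identity init; a rough manual pre-alignment helps ICP converge

result = o3d.pipelines.registration.registration_icp(
    source, target, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(result.transformation)   # 4x4 rigid transform taking source into target's frame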
Thanks for your answer,
I found the rigid transform between my cloud and the COLMAP one using ICP, but I'm trying to find a systematic answer to this kind of problem. I still do not understand why it produces such a terrible result when using the ground truth poses. The change from c2w to w2c should technically be enough...
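In case it helps someone, here is how I now chain the two steps; it's only a sketch, assuming the ICP result S is a pure rigid 4x4 transform (no scale) from my GT world frame to the COLMAP world frame, and reusing the gt_c2w_to_colmap helper sketched above:

import numpy as np

def aligned_gt_to_colmap(c2w_gt, S):
    # S: 4x4 rigid transform (from ICP) mapping the GT world frame to the
    # COLMAP world frame; assumption: no scale difference between the frames
    c2w_aligned = S @ c2w_gt              # re-express the camera pose in the COLMAP world frame
    return gt_c2w_to_colmap(c2w_aligned)  # then apply the usual axis flip + inversion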
PS: I also tried using my ground truth data with COLMAP, and the same kind of result comes out. We can clearly see that a part of the scene is not being reconstructed (transparent bottom part). It's the same problem I had using my GT poses.
I'm using a colored point cloud (points3D.ply) generated with my GT poses, so there's no chance it could be misaligned with my camera poses.
There's clearly something I'm missing, but I can't seem to figure out what it is...