polyform
LiDAR Point Cloud Alignment
I am currently using the LiDAR mode in Polycam to capture scenes and export raw data. While I know that I can directly export point clouds, what I want to do now is generate a point cloud for each image using camera parameters and depth maps, and then simply overlay them to create a complete scene point cloud. I have successfully generated a point cloud for each image, but I encountered a problem when fusing them together; there is an offset between the point clouds.
So the deprojection from 2D pixels to 3D points in camera coordinates seems to work, but applying the inverse extrinsics matrices does not properly align the point clouds.
Do you have any suggestions as to what could be the cause?
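For anyone hitting the same wall, the per-image deprojection step can be sketched like this (a minimal NumPy sketch; the `backproject` function and pinhole convention are my own, and it assumes the exported poses are camera-to-world matrices — if yours are world-to-camera, invert them first):

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy, cam_to_world):
    """Lift a depth map into a world-space point cloud.

    depth          -- (H, W) metric depth, in meters
    fx, fy, cx, cy -- pinhole intrinsics, in pixels
    cam_to_world   -- 4x4 pose matrix (assumed camera-to-world)
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Deproject each pixel to camera coordinates using the pinhole model.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth, np.ones_like(depth)], axis=-1).reshape(-1, 4)
    # If the pose is already camera-to-world, apply it directly -- do NOT invert.
    return (cam_to_world @ pts.T).T[:, :3]
```

One caveat: if the poses follow the ARKit/OpenGL convention (camera looking down -Z, +Y up), you may additionally need to flip the y and z axes of the camera-space points before applying the pose.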
Did you find a fix for this, @alexrothmaier? I am facing the same issue.
Found a related thread on Reddit; we concluded that it does not work because the poses correspond to the RGB camera's position, so there is a slight offset relative to the LiDAR sensor. I could not fix it and switched to StrayScanner to collect my RGB-D dataset.
Thanks for saving me years, haha! I wonder how the Polycam folks are able to get such good-looking point clouds from the same data.
Hi, how do I get the point cloud? I am not sure whether there is a point cloud file in the raw data.
I tried to visualize the camera poses (transforms.json) and the point cloud (.gltf) in the same coordinate system, but something seems wrong. Could you help me figure it out? Thanks!
@xuyanging Typically, to create a point cloud you use the depth data and camera poses to backproject 2D pixels from the RGB images into 3D space. You need to do this for all images and fuse the individual point clouds to get one point cloud of the entire scene. The problem is that even after doing this, the fused point cloud looks clustered and out of place, probably due to some alignment issue with the camera poses. I am trying to work on a fix and will share the code once I figure it out.
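Once the per-image clouds are in a common world frame, the fusion step itself is just concatenation, optionally with voxel deduplication to keep the size down. A minimal sketch (the function name and voxel approach are my own; libraries like Open3D offer the same via `voxel_down_sample`):

```python
import numpy as np

def fuse_and_downsample(clouds, voxel=0.02):
    """Concatenate per-frame (N_i, 3) point arrays that are already in a
    common world frame, keeping one point per voxel to limit duplicates."""
    pts = np.vstack(clouds)
    keys = np.floor(pts / voxel).astype(np.int64)   # quantize to voxel grid
    _, idx = np.unique(keys, axis=0, return_index=True)
    return pts[idx]
```

Note that this only controls redundancy; it cannot fix a pose/LiDAR offset — misaligned input clouds stay misaligned after fusion.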
@benedictquartey
Thanks for your response.
I checked the output files and found that mesh_info.json is important: after applying the alignmentTransform matrix, the result looks right (see below).
But there is a new problem: when I apply the camera intrinsics to visualize the point cloud from each view's image, the result does not look correct.
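For others following along, applying the alignmentTransform from mesh_info.json can be sketched like this (my own helper names; I am assuming the matrix is stored as a flat list of 16 floats — whether it is row- or column-major should be verified against your own export, and transposed if the result looks rotated):

```python
import json
import numpy as np

def load_alignment(mesh_info_path):
    """Read the 4x4 alignmentTransform from Polycam's mesh_info.json.
    Assumes a flat list of 16 floats; verify row- vs column-major order."""
    with open(mesh_info_path) as f:
        info = json.load(f)
    return np.array(info["alignmentTransform"], dtype=float).reshape(4, 4)

def apply_transform(points, m):
    """Apply a 4x4 homogeneous transform to an (N, 3) point array."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (m @ homo.T).T[:, :3]
```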
Hi, @xuyanging. I have downloaded the image files from the Polycam website, but I am having trouble understanding the camera pose information provided. The camera parameters in the folder seem unusual and don't make sense to me, particularly the values for cx, cy, fx, and fy.
Could you please provide some guidance or clarification on how these camera pose parameters are generated or how they should be interpreted? Any suggestions or resources you can offer would be greatly appreciated.
@Entongsu cx, cy, fx, and fy are the camera intrinsic parameters; you can refer to this page for more information: https://www.baeldung.com/cs/focal-length-intrinsic-camera-parameters
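In short, fx and fy are the focal lengths and cx, cy the principal point, all in pixels. A tiny pinhole-projection example (the `project` helper is my own illustration; also note that if Polycam's intrinsics correspond to the full-resolution images, you must scale all four values when you downscale the images):

```python
def project(point_cam, fx, fy, cx, cy):
    """Project a 3D point in camera coordinates to pixel coordinates
    with the pinhole model: u = fx * x / z + cx, v = fy * y / z + cy."""
    x, y, z = point_cam
    return fx * x / z + cx, fy * y / z + cy
```

For example, a point on the optical axis, (0, 0, 1), lands exactly at the principal point (cx, cy).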