LiDAR Point Cloud Alignment

Open alexrothmaier opened this issue 9 months ago • 9 comments

I am currently using the LiDAR mode in Polycam to capture scenes and export raw data. While I know that I can directly export point clouds, what I want to do now is generate a point cloud for each image using camera parameters and depth maps, and then simply overlay them to create a complete scene point cloud. I have successfully generated a point cloud for each image, but I encountered a problem when fusing them together; there is an offset between the point clouds.

So the deprojection from 2D to 3D points in camera coordinates seems to work, but the inverse extrinsics matrices do not seem to properly align the point clouds.

Do you have any suggestions as to what could be the cause?
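
For reference, here is a minimal back-projection sketch in Python (assuming a pinhole model with per-frame intrinsics fx, fy, cx, cy, a metric depth map, and a 4x4 camera-to-world pose; the names are illustrative, not Polycam's actual API):

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy, cam_to_world):
    """Back-project a depth map (H x W, in metres) into world-space 3D points.

    Assumes a pinhole camera model and a 4x4 camera-to-world pose matrix;
    the exact axis conventions and depth units depend on the Polycam export.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    pts_world = (cam_to_world @ pts_cam.T).T[:, :3]
    return pts_world[depth.reshape(-1) > 0]  # drop zero/invalid depth pixels
```

If the exported poses are world-to-camera rather than camera-to-world, they need to be inverted before this step; the same goes for any axis flip between the depth map convention and the pose convention.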

alexrothmaier avatar May 07 '24 12:05 alexrothmaier

Did you find a fix for this, @alexrothmaier? I am facing the same issue.

benedictquartey avatar Jul 11 '24 05:07 benedictquartey

Found a related thread on Reddit; our assumption is that it does not work because the poses correspond to the RGB camera's position, so there is a slight offset relative to the LiDAR sensor. I could not fix it and switched to StrayScanner to collect my RGB-D dataset.

alexrothmaier avatar Jul 12 '24 11:07 alexrothmaier

Thanks for saving me years haha! I wonder how the Polycam folks are able to get such good-looking point clouds from the same data.

benedictquartey avatar Jul 12 '24 13:07 benedictquartey

Hi, how do you get the point cloud? I am not sure whether there is a point cloud file in the raw data.

xuyanging avatar Jul 15 '24 08:07 xuyanging

I tried to visualize the camera poses (transforms.json) and the point cloud (.gltf) in the same coordinate system, but something seems wrong. Could you help me figure it out? Thanks! [image]

xuyanging avatar Jul 16 '24 09:07 xuyanging

@xuyanging Typically, to create a point cloud you use the depth data and camera poses to back-project 2D pixels from the RGB images into 3D space. You need to do this for all images and fuse the individual point clouds to get one point cloud of the entire scene. The problem is that even after doing this, the fused point cloud looks clustered and out of place, probably due to some alignment issue with the camera poses. I am trying to work on a fix and will share the code once I figure it out.
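
A rough sketch of that fusion step, assuming the per-frame clouds are already in world coordinates (Open3D here is just a convenient way to merge and inspect the result, not something the Polycam export requires):

```python
import numpy as np
import open3d as o3d

def fuse_pointclouds(per_frame_points, voxel_size=0.01):
    """Merge a list of per-frame (N_i, 3) world-space point arrays into one cloud."""
    merged = np.concatenate(per_frame_points, axis=0)
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(merged)
    # Voxel downsampling keeps the fused cloud at a manageable size.
    return pcd.voxel_down_sample(voxel_size=voxel_size)
```

If the frames drift apart after fusion, the usual suspects are a world-to-camera vs camera-to-world mix-up or a missing global alignment transform, as discussed below.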

benedictquartey avatar Jul 16 '24 13:07 benedictquartey

@benedictquartey

Thanks for your response.

I checked the output files and found that mesh_info.json is important. After applying the alignmentTransform matrix, it looks right (see below). [image]
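
For anyone else trying this, a sketch of what applying that transform could look like (assuming mesh_info.json stores alignmentTransform as 16 floats forming a 4x4 matrix; whether it is row- or column-major, and whether it should be applied to the points or to the poses, should be checked against your own export):

```python
import json
import numpy as np

with open("mesh_info.json") as f:
    info = json.load(f)

# Assumption: "alignmentTransform" is a flat list of 16 floats.
# Row-major vs column-major ordering must be verified against your data.
T_align = np.asarray(info["alignmentTransform"], dtype=float).reshape(4, 4)

def apply_transform(points, T):
    """Apply a 4x4 homogeneous transform to an (N, 3) point array."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (T @ homo.T).T[:, :3]
```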

But there is a new problem: when I apply the camera intrinsics to project the point cloud into each view's image, the result does not look correct. [image]
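
In case it helps with debugging that, projecting world-space points back into a single view is just the inverse of the back-projection above (again only a sketch, under the same pinhole and camera-to-world assumptions):

```python
import numpy as np

def project_points(points_world, fx, fy, cx, cy, cam_to_world):
    """Project (N, 3) world-space points into pixel coordinates of one camera."""
    world_to_cam = np.linalg.inv(cam_to_world)
    homo = np.hstack([points_world, np.ones((points_world.shape[0], 1))])
    pts_cam = (world_to_cam @ homo.T).T[:, :3]
    z = pts_cam[:, 2]
    in_front = z > 0  # keep only points in front of the camera
    u = fx * pts_cam[in_front, 0] / z[in_front] + cx
    v = fy * pts_cam[in_front, 1] / z[in_front] + cy
    return np.stack([u, v], axis=-1)
```

If the reprojected points come out mirrored or offset, a common cause is that the intrinsics belong to a different image resolution than the one being drawn on, or that the camera axis sign convention differs from the one assumed here.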

xuyanging avatar Jul 17 '24 02:07 xuyanging

Hi, @xuyanging. I have downloaded the image files from the Polycam website, but I am having trouble understanding the camera pose information provided. The camera parameters in the folder seem unusual and don't make sense to me, particularly the values for cx, cy, fx, and fy.

Could you please provide some guidance or clarification on how these camera pose parameters are generated or how they should be interpreted? Any suggestions or resources you can offer would be greatly appreciated.

Entongsu avatar Jul 19 '24 08:07 Entongsu

@Entongsu cx, cy, fx, fy are the camera intrinsic parameters; you can refer to this article for more information: https://www.baeldung.com/cs/focal-length-intrinsic-camera-parameters
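
In other words, they are the entries of the standard 3x3 pinhole intrinsic matrix (a minimal illustration; the values come from the per-frame camera JSON in the export):

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy):
    """Build the pinhole intrinsic matrix K from focal lengths and principal point (in pixels)."""
    return np.array([
        [fx, 0.0, cx],
        [0.0, fy, cy],
        [0.0, 0.0, 1.0],
    ])
```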

xuyanging avatar Aug 30 '24 05:08 xuyanging