
ROS 2 node for camera lidar fusion

schreiterjp opened this issue 1 year ago · 8 comments

Hi,

thanks for the great project and the easy calibration of lidar and camera.

As a nice addition to the project itself, I would like to request a ROS 2 node that implements the fusion/overlay of the camera image and the lidar point cloud. The calibration data could be used directly to publish a fused point cloud.

I don't know what the best way to republish is, but I could imagine something like the following (a minimal sketch of the first option follows the list):

  • colorised point cloud (with image pixel values)
  • depth image (mainly from the point cloud)
  • projection as a point cloud
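
For the first option, here is a minimal offline sketch of the idea in plain NumPy/OpenCV. The function name is mine, and it assumes the points have already been transformed into the camera frame:

import cv2
import numpy as np

def colorize_points(points_cam, image, camera_matrix, distortion):
    # Keep only points in front of the camera; points behind it would
    # otherwise be mirrored into the image by projectPoints.
    points_cam = points_cam[points_cam[:, 2] > 0]

    # rvec/tvec are zero because the points are already in the camera frame.
    uv, _ = cv2.projectPoints(points_cam, np.zeros(3), np.zeros(3),
                              camera_matrix, distortion)
    uv = uv.reshape(-1, 2).round().astype(int)

    # Discard projections that fall outside the image bounds.
    h, w = image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)

    # Return (x, y, z, b, g, r) rows -- the payload a colorized
    # PointCloud2 message would carry.
    return np.hstack([points_cam[ok], image[uv[ok, 1], uv[ok, 0]]])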

A publishing ROS 2 node would also be nice, so that the results can be visualised in RViz.

I didn't find an existing ROS 2 project for this kind of task, so any suggestions would also be cool.

Thanks again for the really nice work!

Cheers jp

schreiterjp avatar Sep 07 '23 08:09 schreiterjp

Thanks for your request. It would be a useful feature, and I may work on this later in my spare time.

koide3 avatar Sep 13 '23 10:09 koide3

Thanks for your reply.

I thought about it a bit more. Since you already have the viewer, which uses the calibration matrix to project the point cloud onto the image, that is probably the easiest starting point for a ROS 2 node. In the end a user would get a real-time visualisation like the viewer's, but displayed in RViz2, for example.
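
A rough rclpy sketch of what such a node could look like (topic names and the transform/intrinsic values are placeholders, not from the project; the projection itself is the same projectPoints approach discussed below):

import cv2
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, PointCloud2
from sensor_msgs_py import point_cloud2
from cv_bridge import CvBridge
from message_filters import Subscriber, ApproximateTimeSynchronizer

class LidarCameraOverlay(Node):
    def __init__(self):
        super().__init__('lidar_camera_overlay')
        self.bridge = CvBridge()
        # T_camera_lidar, K, and dist would come from the calibration
        # result; identity/zero placeholders here.
        self.T_camera_lidar = np.eye(4)
        self.K = np.eye(3)
        self.dist = np.zeros(5)
        self.pub = self.create_publisher(Image, 'fused_image', 10)
        # Pair the latest image and cloud by timestamp.
        sync = ApproximateTimeSynchronizer(
            [Subscriber(self, Image, 'image_raw'),
             Subscriber(self, PointCloud2, 'points')],
            queue_size=10, slop=0.1)
        sync.registerCallback(self.callback)

    def callback(self, image_msg, cloud_msg):
        image = self.bridge.imgmsg_to_cv2(image_msg, desired_encoding='bgr8')
        pts = np.array([[p[0], p[1], p[2]] for p in
                        point_cloud2.read_points(
                            cloud_msg, field_names=('x', 'y', 'z'),
                            skip_nans=True)])
        # Transform lidar points into the camera frame and drop points
        # behind the camera.
        pts_cam = (self.T_camera_lidar[:3, :3] @ pts.T).T + self.T_camera_lidar[:3, 3]
        pts_cam = pts_cam[pts_cam[:, 2] > 0]
        uv, _ = cv2.projectPoints(pts_cam, np.zeros(3), np.zeros(3),
                                  self.K, self.dist)
        # Draw the projected points onto the image and republish it.
        for u, v in uv.reshape(-1, 2).astype(int):
            if 0 <= u < image.shape[1] and 0 <= v < image.shape[0]:
                cv2.circle(image, (u, v), 2, (0, 255, 0), -1)
        self.pub.publish(self.bridge.cv2_to_imgmsg(image, encoding='bgr8'))

def main():
    rclpy.init()
    rclpy.spin(LidarCameraOverlay())

if __name__ == '__main__':
    main()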

schreiterjp avatar Sep 13 '23 11:09 schreiterjp

The calibration matrix projects a point from the camera frame to the corresponding lidar 3D point, right?

With the inverted matrix I should be able to project a 3D lidar point into the 2D image, if I understood correctly.

For a first test I thought about using OpenCV with Python, specifically the projectPoints function.

schreiterjp avatar Sep 14 '23 10:09 schreiterjp

Yes, T_lidar_camera in the calibration result file is the transformation that brings a point in the camera frame to the LiDAR frame (i.e., pt_lidar = T_lidar_camera * pt_camera).
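
So projecting lidar points into the image requires the inverse, T_camera_lidar. For a rigid transform the inverse has a closed form; a quick NumPy sketch:

import numpy as np

def invert_se3(T):
    # [R t; 0 1]^-1 = [R^T  -R^T t; 0 1]
    R, t = T[:3, :3], T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T
    T_inv[:3, 3] = -R.T @ t
    return T_inv

# pt_lidar = T_lidar_camera @ pt_camera
# => pt_camera = invert_se3(T_lidar_camera) @ pt_lidar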

koide3 avatar Sep 14 '23 10:09 koide3

Hi @koide3,

tried to do the projection using cv2.

import cv2
import numpy as np

# saved image from ROS Image message with shape (1520, 2688, 3)
image = cv2.imread("xxx.png")

# saved point cloud from ROS PointCloud2 message converted to numpy with shape (67719, 3)
point_cloud = np.loadtxt("xxx.txt")

# camera lidar calibration with direct_visual_lidar_calibration

# T_lidar_camera as 4x4 matrix from matrix converter
# projection_matrix = np.float64([[0.06767, -0.11368, 0.99121, -0.02107],
#                                 [-0.99735, 0.01882, 0.07025, -0.14916],
#                                 [-0.02665, -0.99334, -0.11211, 0.17919],
#                                 [0.00000, 0.00000, 0.00000, 1.00000]])

# inverted matrix (T_camera_lidar) 4x4 from matrix converter
projection_matrix = np.float64([[0.06767, -0.99735, -0.02665, -0.14257],
                                [-0.11368, 0.01882, -0.99334, 0.17841],
                                [0.99121, 0.07025, -0.11211, 0.05145],
                                [0.00000, 0.00000, 0.00000, 1.00000]])


def fuse_lidar_camera(image, point_cloud, projection_matrix):
    # Convert the rotation part to a Rodrigues vector for cv2.projectPoints.
    rvec, _ = cv2.Rodrigues(projection_matrix[:3, :3])
    tvec = projection_matrix[:3, 3]

    # Intrinsics and distortion from the pinhole camera calibration.
    camera_matrix = np.float64([[1718.65976, 0.0, 1345.4385],
                                [0.0, 1718.88595, 796.10968],
                                [0.0, 0.0, 1.0]])
    distortion = np.float64([[-0.378833, 0.102727, -0.002767, -0.003375, 0.0]])

    # Drop points that end up behind the camera; projectPoints would
    # otherwise mirror them into the image.
    points_cam = (projection_matrix[:3, :3] @ point_cloud.T).T + tvec
    point_cloud = point_cloud[points_cam[:, 2] > 0]

    projected_points, _ = cv2.projectPoints(point_cloud, rvec, tvec,
                                            camera_matrix, distortion)

    # Draw each projected point into a blank single-channel image.
    fused_image = np.zeros((image.shape[0], image.shape[1]), dtype=np.float32)
    for point in projected_points:
        x, y = point[0]
        cv2.circle(fused_image, (int(x), int(y)), 2, 255, -1)

    # cv2.imread returns BGR, so the luminance weights go in B, G, R order.
    grayscale_image = np.dot(image, [0.114, 0.587, 0.299])

    # Blend the projected points with the grayscale input image.
    combined_image = np.uint8(0.7 * grayscale_image + 0.3 * fused_image)
    return combined_image


res = fuse_lidar_camera(image, point_cloud, projection_matrix)

cv2.imshow("fused", res)
cv2.waitKey(0)
cv2.destroyAllWindows()

The calibration looked really good inside the viewer itself.

But with this projection there is quite a bit of offset. Please see the following crop from the resulting image.

[screenshot: crop of the resulting fused image showing the projection offset]

Any idea on how to overcome this issue?

Thanks.

schreiterjp avatar Sep 15 '23 13:09 schreiterjp

The code itself looks good. Did you use the pinhole camera model for the calibration?

koide3 avatar Sep 25 '23 09:09 koide3

Yes, I used this package: https://index.ros.org/p/camera_calibration/github-ros-perception-image_pipeline/#humble with the pinhole argument.

The calibration matrix comes from the manual calibration (initial_guess_manual), which was visually better than the fine registration. The automatic method behaved similarly to this issue: https://github.com/koide3/direct_visual_lidar_calibration/issues/54

I don't really know where the offset comes from. The camera is mounted above the lidar, so if the result were offset the other way around it would be plausible and could point to some kind of extrinsic calibration fault.

Thanks for taking the time.

schreiterjp avatar Sep 25 '23 10:09 schreiterjp

Hi @schreiterjp. I managed to solve this issue. The offset comes from the order of the intrinsic parameters in the calibration output: the output stores them as fx, cx, fy, cy, but I read them as fx, fy, cx, cy like you did, so two values were swapped. Double-check the order of your intrinsic parameters. I will share my output here.

[image: projection result after fixing the intrinsic parameter order]
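
To make the fix concrete, here is a small sketch of building the camera matrix from the four intrinsic values (the file name and field layout are assumptions; adjust the unpacking order to whatever your calibration output actually uses):

import json
import numpy as np

# File name and field layout are assumptions -- check your own output file.
with open('calib.json') as f:
    calib = json.load(f)

# Unpack in the order your output uses; getting this wrong shifts every
# projected point, which looks exactly like an extrinsic offset.
fx, fy, cx, cy = calib['camera']['intrinsics']

camera_matrix = np.float64([[fx, 0.0, cx],
                            [0.0, fy, cy],
                            [0.0, 0.0, 1.0]])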

jeychandar avatar Apr 25 '24 08:04 jeychandar

Here is a ROS 2 package I created for this: https://github.com/AbdullahGM1/ros2_lidar_camera_fusion_with_detection

AbdullahGM1 avatar Oct 09 '24 20:10 AbdullahGM1

Hey @AbdullahGM1, does it support ROS 2 Foxy?

jeychandar avatar Oct 10 '24 04:10 jeychandar

I didn't try it on Foxy; you can clone the package and give it a try.

AbdullahGM1 avatar Oct 10 '24 04:10 AbdullahGM1