
RGB-D and 2D lidar fusion


Hello, I have a small question about the underlying principle; I hope you can help me. If I use an RGB-D camera and a 2D lidar as sensors with rtab-map, how does rtab-map fuse the two types of data? Does it perform fusion for localization, and if so, what method does it use? Is it extended Kalman filtering? Also, would it be feasible to further improve localization accuracy by fusing RTAB-Map with other lidar SLAM systems through a UKF?

Boris-Cui · Apr 11 '24 09:04

For that configuration, the behavior is explained in this paper:

> The transformation is computed using the same Motion Estimation approach used by visual odometry (Section 3.1.1), and if accepted, the new link is added to the graph. When a laser scan or a point cloud is available, the link's transformation is refined using the same ICP Registration approach as with lidar odometry (described in Section 3.1.2).

So in your case it is a two-step process (sketched in the code after this list):

  1. Use the RGB-D camera to visually compute a localization transform against a node in the map.
  2. Refine that transform using the lidar scans of the two nodes.
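
Not RTAB-Map's actual code, just a minimal Python sketch of that two-step idea, using OpenCV's PnP for the visual motion estimation and Open3D's ICP for the refinement; all thresholds, frame conventions, and the function name are illustrative assumptions:

```python
import numpy as np
import cv2
import open3d as o3d

def localize_against_map_node(map_pts_3d, cur_pts_2d, K, scan_map, scan_cur):
    """map_pts_3d: Nx3 points of the map node (in its camera frame).
    cur_pts_2d: Nx2 matched keypoints in the current image.
    K: 3x3 camera intrinsics.
    scan_map, scan_cur: Mx3 laser scans (2D scans as z=0 points) of each node."""
    # Step 1: visual motion estimation (PnP + RANSAC).
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        map_pts_3d.astype(np.float64), cur_pts_2d.astype(np.float64),
        K, None, reprojectionError=2.0)
    if not ok or inliers is None or len(inliers) < 20:  # acceptance threshold assumed
        return None  # link rejected, nothing added to the graph
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)
    T[:3, 3] = tvec.ravel()  # T maps the map node's frame into the current frame

    # Step 2: refine the accepted transform with ICP on the laser scans.
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(scan_map))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(scan_cur))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, 0.1, T,  # 0.1 m correspondence distance, assumed
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # refined link transform
```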

The robot can also be localized by lidar alone, using proximity detection (also described in that paper).
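
Proximity detection can be pictured as attempting that same scan registration against graph nodes that are spatially close to the current pose. A hedged sketch; the radius, acceptance threshold, and graph layout are assumptions, not RTAB-Map parameters:

```python
import numpy as np
import open3d as o3d

def icp_register(scan_tgt, scan_src, init, max_dist=0.1):
    # Register scan_src onto scan_tgt, starting from the init guess.
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(scan_src))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(scan_tgt))
    r = o3d.pipelines.registration.registration_icp(src, tgt, max_dist, init)
    return r.transformation if r.fitness > 0.5 else None  # acceptance assumed

def proximity_detection(graph, current_pose, current_scan, radius=1.0):
    """graph: {node_id: (pose_4x4, scan_Mx3)}, poses expressed in the map frame.
    Returns (node_id, transform) for the first nearby node whose scan registers."""
    for node_id, (pose, scan) in graph.items():
        # Only try nodes spatially close to the current pose estimate.
        if np.linalg.norm(pose[:3, 3] - current_pose[:3, 3]) > radius:
            continue
        # Initial guess for ICP: relative transform predicted from the poses.
        init = np.linalg.inv(pose) @ current_pose
        T = icp_register(scan, current_scan, init)
        if T is not None:  # accepted -> a proximity link could be added
            return node_id, T
    return None
```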

Fusing two global localization approaches (not odometry) is more complicated, as they would fight each other from their respective map frames.
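
As a toy illustration of that frame conflict: each system reports a pose in its own map frame, so any fusion filter (EKF or UKF) first needs the transform between the two map frames, which is itself unknown. The numbers below are made up:

```python
import numpy as np

# Two systems localize the same robot, each in its own map frame.
# Unknown offset between the frames (here: 2 m shift plus a 90 degree yaw).
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
T_A_from_B = np.array([[c, -s, 0, 2.0],
                       [s,  c, 0, 0.0],
                       [0,  0, 1, 0.0],
                       [0,  0, 0, 1.0]])

p_B = np.array([1.0, 1.0, 0.0, 1.0])   # robot position reported in map frame B
p_A = T_A_from_B @ p_B                 # same robot, reported in map frame A

# Naive fusion mixes coordinates from two different frames:
naive = (p_A[:3] + p_B[:3]) / 2                      # meaningless
# Consistent fusion maps both into one frame first:
consistent = (p_A[:3] + (T_A_from_B @ p_B)[:3]) / 2  # both now in frame A
print(naive, consistent)
```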

matlabbe · Apr 12 '24 00:04