Add wheel odometry and two LIDARs to RTAB-Map
Hi @matlabbe
I am using RTAB-Map with an Ouster OS0-64 LIDAR; RTAB-Map subscribes to two topics: the point cloud and the IMU data from the LIDAR. I have two questions:
Question 1: We have a problem when the robot moves through a long corridor: the icp_odometry node fails due to the low geometric complexity of the point cloud. To solve this, we want to feed the robot's wheel odometry to RTAB-Map. Looking around, I saw that there are two ways:
The first one: set the frame of the wheel odometry as the guess_frame_id parameter (this feeds the wheel odometry to icp_odometry as a motion guess).
The second one: use the robot_localization ROS package to fuse the odometry sources.
Which of these options is better?
Question 2: How can I use the point clouds of two LIDAR sensors in RTAB-Map? The icp_odometry node only needs the data from one LIDAR, but for mapping I need the data from both.
Thank you in advance for your answer.
Question 1:
The issue you are describing is exactly the one discussed in Section 4.4.1 of this paper.
The rtabmap way to deal with this situation is to use guess_frame_id with external odometry (generally computed from the wheels, or wheels+IMU combined). As explained in the paragraph starting with "Motion Prediction:" in Section 3.1.2 of the same paper, when an external guess is used, this is what happens when a corridor-like environment is detected:
If the structural complexity of the current point cloud is lower than the fixed threshold “Icp/PointToPlaneMinComplexity”, only orientation is estimated with ICP and position (along the problematic direction) is derived from the external odometry. The structural complexity of a 2D point cloud is defined as the second Eigenvalue of the Principal Component Analysis (PCA) of the point cloud’s normals, multiplied by two. For 3D point cloud, the third Eigenvalue is used multiplied by three.
In other words: the displacement along the corridor is taken from the wheel odometry, while the orientation is corrected with ICP. This prevents icp_odometry from thinking the robot is not moving, or from jumping erratically along the corridor direction.
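A minimal launch fragment for this setup could look like the one below. The frame names, the input topic and the threshold value are examples for illustration; adjust them to your TF tree (the wheel odometry must publish the TF set in guess_frame_id):

```xml
<launch>
  <node pkg="rtabmap_ros" type="icp_odometry" name="icp_odometry">
    <!-- point cloud from the LIDAR used for odometry -->
    <remap from="scan_cloud" to="/ouster/points"/>
    <param name="frame_id"       value="base_link"/>
    <param name="odom_frame_id"  value="icp_odom"/>
    <!-- frame published by the wheel odometry, used as motion guess -->
    <param name="guess_frame_id" value="odom"/>
    <!-- below this complexity, position along the degenerate axis
         is taken from the guess instead of ICP -->
    <param name="Icp/PointToPlaneMinComplexity" value="0.02"/>
  </node>
</launch>
```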
I don't have experience using robot_localization for that kind of problem. The issue I see is how to make sure the ICP covariance is high in environments with low structural complexity, so that the EKF filter can properly "discard" the ICP estimate along the corridor direction. It may be possible, but I guess rtabmap's code estimating the ICP covariance would need to be modified to reflect (implicitly, in the estimated covariance) that low complexity has been detected along some axis.
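To give an intuition of the complexity metric quoted above, here is a small sketch of the idea (not rtabmap's exact implementation): take the PCA eigenvalues of the surface normals and scale the smallest relevant one by the dimension. In a corridor, all normals point along one axis, so the metric collapses toward zero:

```python
import numpy as np

def structural_complexity(normals):
    """Structural complexity from the PCA of a cloud's surface normals.

    2D cloud: second eigenvalue * 2; 3D cloud: third eigenvalue * 3.
    Sketch of the idea described in the paper, not rtabmap's exact code.
    """
    normals = np.asarray(normals, dtype=float)
    # make sure normals are unit length
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    # PCA of the normal directions: eigenvalues of their scatter matrix
    scatter = normals.T @ normals / len(normals)
    eigvals = np.sort(np.linalg.eigvalsh(scatter))[::-1]  # descending
    dim = normals.shape[1]
    return dim * eigvals[dim - 1]

# Long corridor (2D): all normals along one axis -> complexity ~ 0
corridor = [[1, 0], [-1, 0], [1, 0], [-1, 0]]
# Room corner: normals spread over two axes -> complexity 1.0
corner = [[1, 0], [0, 1], [-1, 0], [0, -1]]
print(structural_complexity(corridor))  # ~0.0
print(structural_complexity(corner))    # 1.0
```

When this value falls below Icp/PointToPlaneMinComplexity, ICP only corrects orientation and the guess provides the translation along the degenerate axis.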
Question 2:
You may use the point_cloud_aggregator node to synchronize the two point clouds and adjust their relative pose based on odometry (i.e., fixed_frame_id=odom). The current downside is that if ray tracing is enabled for the occupancy grid, it will be done from the base frame (the frame_id set for point_cloud_aggregator) instead of from the respective LIDAR frames.
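A possible configuration could look like the fragment below (the LIDAR topic names are placeholders for your setup):

```xml
<node pkg="rtabmap_ros" type="point_cloud_aggregator" name="point_cloud_aggregator">
  <remap from="cloud1" to="/lidar_front/points"/>
  <remap from="cloud2" to="/lidar_rear/points"/>
  <param name="count"          value="2"/>
  <param name="frame_id"       value="base_link"/>
  <!-- compensate robot motion between the two clouds using odometry -->
  <param name="fixed_frame_id" value="odom"/>
</node>
```

You would then subscribe rtabmap to the aggregated cloud for mapping, while icp_odometry keeps subscribing to the single LIDAR of your choice.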