IMU + RGBD image drift
Hi,
I am currently setting up rtabmap in ROS with an RGB camera, a 3D lidar sensor, and an IMU.
My problem is that the map drifts after just a few seconds. See the image below (it is supposed to be a straight wall, but it curves downward).
My launch file looks like this:
<?xml version="1.0"?>
<launch>
  <arg name="frame_id" default="base_link" />
  <arg name="rpi_camera" default="false" />
  <arg name="base_folder" default="/app" />
  <arg name="left_lidar_id" default="A03E-2530-2CC3-3A70" />
  <arg name="right_lidar_id" default="9C3E-253B-2CC3-3A6C" />
  <arg name="lidar_slave" default="false" />
  <arg name="rosbag_path" default="" />
  <arg name="record_rosbag" default="false" />
  <arg name="robot_localization" default="false" />

  <!-- Set up lidar geometry and transforms -->
  <param name="robot_description" textfile="$(arg base_folder)/launch/config/lidar-2.0.0.xml"/>
  <node pkg="robot_state_publisher" type="robot_state_publisher" name="rob_st_pub" >
    <remap from="robot_description" to="robot_description" />
    <remap from="joint_states" to="different_joint_states" />
  </node>

  <!-- Nodelet managers -->
  <node pkg="nodelet" type="nodelet" name="left_lidar_manager" args="manager" output="screen" />

  <group ns="input">
    <group ns="rosbag" unless="$(eval '' == rosbag_path)">
      <node pkg="rosbag" type="play" name="player" output="screen" args="--clock $(arg rosbag_path)"/>
    </group>
    <group ns="record_rosbag" if="$(arg record_rosbag)">
      <node pkg="rosbag" type="record" name="rosbag_record" args="-o /app/data/rosbag /input/sensor/imu/data /input/sensor/imu/magnetometer /input/sensor/rpi_camera_v3_wide_384_216/camera_info /input/sensor/rpi_camera_v3_wide_384_216/image_raw /royale_cam_left_lidar/camera_info /royale_cam_left_lidar/depth_image_0 /royale_cam_left_lidar/point_cloud_0"/>
    </group>
    <group ns="sensor" if="$(eval '' == rosbag_path)">
      <!-- Video input -->
      <include file="$(find video_stream_opencv)/launch/camera.launch" >
        <!-- node name and ros graph name -->
        <arg name="camera_name" value="rpi_camera_v3_wide_384_216" />
        <!-- HTTP stream served by the local camera server -->
        <arg name="video_stream_provider" value="http://127.0.0.1:5000" />
        <!-- buffer queue size of frame capturing -->
        <arg name="buffer_queue_size" value="1" />
        <!-- throttle the publishing of frames -->
        <arg name="fps" value="40" />
        <!-- frame_id of the published images -->
        <arg name="frame_id" value="elp_camera" />
        <!-- camera info loading; note that it needs the "file:///" prefix, e.g.:
             "file:///$(find your_camera_package)/config/your_camera.yaml" -->
        <arg name="camera_info_url" value="file:///app/launch/config/rpi_camera_v3_wide_384_216.yaml" />
        <!-- flip the image horizontally (mirror it) -->
        <arg name="flip_horizontal" value="false" />
        <!-- flip the image vertically -->
        <arg name="flip_vertical" value="false" />
      </include>
      <!-- Lidar input -->
      <node pkg="nodelet" type="nodelet" name="left_lidar_driver" args="load royale_in_ros/royale_nodelet /left_lidar_manager" respawn="true" output="screen">
        <param name="node_name" type="str" value="left_lidar"/>
        <param name="royale_cam_id" type="str" value="$(arg left_lidar_id)" />
        <param name="royale_use_case" type="str" value="Long_Range_30FPS" />
      </node>
      <!-- IMU -->
      <node name="ros_imu_bno055_node" pkg="ros_imu_bno055" type="imu_ros.py" output="screen">
        <param name="serial_port" type="string" value="/dev/ttyAMA0" />
        <param name="frame_id" type="string" value="base_link" />
        <param name="use_magnetometer" value="true" />
        <param name="frequency" value="60" />
        <param name="operation_mode" value="NDOF" />
        <param name="reset_orientation" value="false" />
      </node>
    </group>
    <group ns="processing">
      <!-- Rectify camera feed -->
      <node name="rectify_rgb" pkg="image_proc" type="image_proc">
        <remap from="image_raw" to="/input/sensor/rpi_camera_v3_wide_384_216/image_raw"/>
        <remap from="camera_info" to="/input/sensor/rpi_camera_v3_wide_384_216/camera_info"/>
        <remap from="image_mono" to="/input/sensor/rpi_camera_v3_wide_384_216/image_mono"/>
        <remap from="image_rect" to="/input/sensor/rpi_camera_v3_wide_384_216/image_rect"/>
        <remap from="image_color" to="/input/sensor/rpi_camera_v3_wide_384_216/image_color"/>
        <remap from="image_rect_color" to="/input/sensor/rpi_camera_v3_wide_384_216/image_rect_color"/>
      </node>
      <!-- Project the lidar point cloud into the view of the RGB camera as a depth image -->
      <node pkg="nodelet" type="nodelet" name="depth_gen" args="load rtabmap_util/pointcloud_to_depthimage /left_lidar_manager" respawn="true">
        <remap from="camera_info" to="/input/sensor/rpi_camera_v3_wide_384_216/camera_info"/>
        <remap from="cloud" to="/royale_cam_left_lidar/point_cloud_0"/>
        <remap from="image" to="/input/sensor/rpi_camera_v3_wide_384_216/depth_image_0_gen"/>
        <param name="fixed_frame_id" value="base_link" />
      </node>
      <node name="rectify_depth" pkg="nodelet" type="nodelet" args="load image_proc/rectify /left_lidar_manager">
        <remap from="image_mono" to="/input/sensor/rpi_camera_v3_wide_384_216/depth_image_0_gen"/>
        <remap from="camera_info" to="/input/sensor/rpi_camera_v3_wide_384_216/camera_info"/>
        <remap from="image_rect" to="/input/sensor/rpi_camera_v3_wide_384_216/depth_image_0_gen_rect"/>
        <param name="interpolation" value="0" />
      </node>
      <node name="imu_helpers" pkg="bv_lidar" type="imu_helpers.py" output="screen">
        <remap from="imu/data_raw" to="/input/sensor/imu/data"/>
        <remap from="imu/data" to="/input/sensor/imu/data_processed"/>
      </node>
    </group>
  </group>

  <!-- RTAB-Map -->
  <include file="$(find rtabmap_launch)/launch/rtabmap.launch">
    <!-- <arg name="scan_cloud_topic" value="/royale_cam_left_lidar/point_cloud_0" /> -->
    <arg name="rgb_topic" value="/input/sensor/rpi_camera_v3_wide_384_216/image_rect_color" />
    <arg name="camera_info_topic" value="/input/sensor/rpi_camera_v3_wide_384_216/camera_info" />
    <arg name="depth_topic" value="/input/sensor/rpi_camera_v3_wide_384_216/depth_image_0_gen_rect" />
    <arg name="imu_topic" value="/input/sensor/imu/data_processed" />
    <arg name="wait_imu_to_init" value="true" />
    <arg name="frame_id" value="base_link" />
    <arg name="rgbd_sync" value="true" />
    <arg name="rtabmap_viz" value="true" />
    <arg name="gui_cfg" value="$(arg base_folder)/data/rtabmap_viz.ini" />
    <arg name="rviz" value="true" />
    <arg name="rviz_cfg" value="$(arg base_folder)/launch/config/rviz.rviz" />
    <arg name="args" value="--delete_db_on_start \
      --Rtabmap/DetectionRate 3 \
      --Odom/FilteringStrategy 1 \
      --Mem/UseOdomGravity true \
      --Odom/ResetCountdown 1 \
      --Rtabmap/StartNewMapOnLoopClosure true \
      --Vis/MinInliers 10 \
      --RGBD/ProximityMaxPaths 0 \
      "/>
    <arg name="queue_size" value="100" />
  </include>
</launch>
You will also see a node named imu_helpers that normalizes the IMU orientation and sets the covariances (the first index of each covariance matrix in the IMU message is originally set to -1, i.e. "unknown"). Below is the code:
#!/usr/bin/env python3
import math
import time

import rospy
from sensor_msgs.msg import Imu


def normalize(v, tolerance=0.00001):
    # Re-normalize the quaternion if its squared magnitude drifts from 1
    mag2 = sum(n * n for n in v)
    if abs(mag2 - 1.0) > tolerance:
        mag = math.sqrt(mag2)
        v = tuple(n / mag for n in v)
    return list(v)


if __name__ == '__main__':
    rospy.init_node("imu_helpers")
    imu_orientation_quaternion_pub = rospy.Publisher("imu/data", Imu, queue_size=1)

    def covariance_matrix(value):
        # Diagonal 3x3 covariance matrix in row-major order
        return [
            value, 0, 0,
            0, value, 0,
            0, 0, value,
        ]

    def process_new_imu_msg(imu_msg):
        # Normalize the orientation quaternion
        quat_norm = normalize((
            imu_msg.orientation.x,
            imu_msg.orientation.y,
            imu_msg.orientation.z,
            imu_msg.orientation.w,
        ))
        imu_msg.orientation.x = quat_norm[0]
        imu_msg.orientation.y = quat_norm[1]
        imu_msg.orientation.z = quat_norm[2]
        imu_msg.orientation.w = quat_norm[3]
        # Replace the driver's -1 ("unknown") covariances with fixed estimates
        imu_msg.orientation_covariance = covariance_matrix(0.001)
        imu_msg.linear_acceleration_covariance = covariance_matrix(1)
        imu_msg.angular_velocity_covariance = covariance_matrix(0.1)
        imu_orientation_quaternion_pub.publish(imu_msg)

    rospy.Subscriber("imu/data_raw", Imu, process_new_imu_msg)
    try:
        while not rospy.is_shutdown():
            time.sleep(1)
    except rospy.ROSInterruptException:
        pass
Do you have any idea why I can't achieve good results with rtabmap?
Can you share a rosbag of the following topics?
rosbag record /tf /tf_static \
/input/sensor/rpi_camera_v3_wide_384_216/image_rect_color \
/input/sensor/rpi_camera_v3_wide_384_216/camera_info \
/input/sensor/rpi_camera_v3_wide_384_216/depth_image_0_gen_rect \
/input/sensor/imu/data_processed \
/royale_cam_left_lidar/point_cloud_0
Here are some things that can go wrong:
- Low or different frame rate between the RGB and depth cameras
- Bad synchronization of the rgb and depth frames
- Bad TF between the RGB and depth cameras (how were the extrinsics computed?)
- ...
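To check the synchronization point, a quick way is to compare the header stamps of the RGB and depth topics from the bag. A minimal sketch in plain Python, assuming `rgb_stamps` and `depth_stamps` are lists of `header.stamp.to_sec()` values you have already extracted (e.g. with the `rosbag` Python API); the sample values below are made up:

```python
def sync_report(rgb_stamps, depth_stamps):
    """For each RGB stamp, find the closest depth stamp and report the gaps."""
    diffs = [min(abs(t_rgb - t_d) for t_d in depth_stamps)
             for t_rgb in rgb_stamps]
    return max(diffs), sum(diffs) / len(diffs)

# Fabricated example: a 30 Hz RGB stream against a much slower depth stream
rgb = [0.000, 0.033, 0.066, 0.100]
depth = [0.000, 0.200]
worst, mean = sync_report(rgb, depth)
print("worst gap: %.3fs, mean gap: %.3fs" % (worst, mean))
# → worst gap: 0.100s, mean gap: 0.050s
```

If the worst gap approaches the inter-frame period of the camera, synchronization is a likely suspect.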
Yes, here you go: https://drive.google.com/file/d/1bxXn9blpWZCKAzm6dPpivV9nQd-p6DIQ/view?usp=sharing
I have only done a manual calibration of the RGB and depth cameras. Do you have a good library to do the extrinsic calibration?
Can you get an IR stream from the depth camera? If so, you may be able to do something like this: http://wiki.ros.org/openni_launch/Tutorials/ExtrinsicCalibrationExternal
It seems there is also a module in OpenCV to do that: https://docs.opencv.org/4.x/d2/d1c/tutorial_multi_camera_main.html
For the data in the rosbag, the time difference between RGB and depth frames is sometimes too large:
The time difference between rgb and depth frames is high (diff=0.133255s, rgb=1709536481.863311s, depth=1709536481.730056s).
It is a problem when the camera is moving: the depth image won't be correctly registered to the color image. On Google Tango we had the same issue (the depth frame rate was 5 Hz and the RGB camera 30 Hz), but with Tango's VIO computed at 30 Hz, we could correctly register the depth cloud to the corresponding RGB frame.
The depth is not dense enough for feature extraction; either reduce the depth resolution by two (to make it denser) or fill the holes:
For such small resolution, you can set GFTT/MinDistance to 3.
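The "reduce resolution to densify" idea can be sketched like this (illustrative only, plain Python on a row-major list-of-lists depth image where 0 means "no data"; a real pipeline would do this with OpenCV/numpy, or with decimation/hole-filling options of the projection nodelet if it exposes them):

```python
def decimate_depth(depth, factor=2):
    """Shrink the depth image, keeping one valid (non-zero) sample per block.

    A sparse projected depth image has many zero (invalid) pixels; shrinking
    it while preferring valid samples raises the fraction of usable depth.
    """
    h, w = len(depth), len(depth[0])
    out = []
    for by in range(0, h - h % factor, factor):
        row = []
        for bx in range(0, w - w % factor, factor):
            valid = [depth[by + dy][bx + dx]
                     for dy in range(factor) for dx in range(factor)
                     if depth[by + dy][bx + dx] > 0]
            # Keep the closest valid sample; 0 means still no data
            row.append(min(valid) if valid else 0)
        out.append(row)
    return out

sparse = [
    [0, 0, 5, 0],
    [3, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 7, 0, 9],
]
print(decimate_depth(sparse))  # → [[3, 5], [7, 9]]
```

Every 2x2 block with at least one measurement now yields a usable pixel, so features extracted on the RGB image are more likely to land on valid depth.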
For reference, I tested the bag with:
roslaunch rtabmap_launch rtabmap.launch \
args:="-d --GFTT/MinDistance 3 --Rtabmap/DetectionRate 0" \
rgb_topic:=/input/sensor/rpi_camera_v3_wide_384_216/image_rect_color \
camera_info_topic:=/input/sensor/rpi_camera_v3_wide_384_216/camera_info \
depth_topic:=/input/sensor/rpi_camera_v3_wide_384_216/depth_image_0_gen_rect \
use_sim_time:=true \
frame_id:=elp_camera \
queue_size:=50
Thank you so much for such good support. I am already getting much better results by increasing the density of the depth image.
One thing I am wondering about is why my odom frame rotates compared to the map frame. It starts off really well, with the odom and map frames almost identical:
But it then suddenly rotates and shifts:
Shouldn't the IMU gravity constraint restrict how the odom frame is allowed to move with respect to the map frame? No matter how bad the relation between odom and map gets, the relation between base_link and odom seems correct (orientation and position).
Do you delete the database on each restart (argument --delete_db_on_start)? It looks like it is relocalizing on a previous session; otherwise, map -> odom would only change on loop closure detection. In rtabmap_viz you would see a green background in the loop closure view when it happens.

Another thing that could happen: if the base_link -> IMU transform is wrong, the map optimizer may wrongly optimize the map with the IMU constraints. You can turn off IMU optimization to check whether that is the case by setting --Optimizer/GravitySigma 0.
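One way to sanity-check that transform is to rotate a static accelerometer reading by the IMU's reported orientation: at rest, the result should point along the world up axis (about +9.81 m/s² on z in ENU). A minimal sketch in plain Python, with the quaternion as (x, y, z, w) to match the ROS message layout; the sample values are made up:

```python
import math

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (x, y, z, w)."""
    x, y, z, w = q
    # v' = v + 2*w*(q_vec x v) + 2*(q_vec x (q_vec x v))
    tx = 2.0 * (y * v[2] - z * v[1])
    ty = 2.0 * (z * v[0] - x * v[2])
    tz = 2.0 * (x * v[1] - y * v[0])
    return (
        v[0] + w * tx + (y * tz - z * ty),
        v[1] + w * ty + (z * tx - x * tz),
        v[2] + w * tz + (x * ty - y * tx),
    )

# With an identity orientation, a static accelerometer reading of
# (0, 0, 9.81) in the IMU frame already points straight up in the world:
print(quat_rotate((0.0, 0.0, 0.0, 1.0), (0.0, 0.0, 9.81)))

# If the orientation chain (including a wrong static TF) is off by
# 90 degrees about x, gravity lands on the wrong axis, and the map
# optimizer will "correct" the map accordingly:
s, c = math.sin(math.pi / 4), math.cos(math.pi / 4)
print(quat_rotate((s, 0.0, 0.0, c), (0.0, 0.0, 9.81)))  # z ends up near -y
```

If the rotated vector while the robot sits still is far from (0, 0, 9.81), the base_link -> IMU transform (or the IMU's own orientation output) is suspect.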