dynamic_robot_localization
How do you get the correct TF?
Whenever you compute the TF in the lidar frame using ICP:
icp.getFinalTransformation()
, getting the matrix and converting it to a ROS TF, how do you then proceed to publish the map -> odom TF? Is it like this?
- First
final_transformation_as_tf = final_transformation_as_tf (the one obtained from the matrix) * map_to_base_tf2 (obtained from the TF listener);
- Then get the relative TF
relative_tf2 = sensor_to_base_TF.inverseTimes(odom_to_base_TF)
- Finally
final_transformation_as_tf = final_transformation_as_tf * relative_tf2;
Is that correct?
Good morning :)
I do not perform point cloud registration in the lidar frame.
Instead I do it in the map frame and then apply the ICP corrections on top of:
new_base_link_to_map_transformation = icp_corrections_transformation * last_odom_to_map_transformation * base_link_to_odom_transformation
This is equivalent to performing the ICP registration in the sensor frame while providing an initial alignment / guess:
https://github.com/PointCloudLibrary/pcl/blob/master/registration/include/pcl/registration/registration.h#L425
using the transformation:
alignment_guess_sensor_to_map_transformation = last_odom_to_map_transformation * base_link_to_odom_transformation * sensor_to_base_link_transformation
This way ICP only needs to correct the small errors that the odometry accumulated since the last lidar scan that was processed, allowing it to converge much faster.
After having the new_base_link_to_map_transformation, I compute the TF [odom -> map] using pose_to_tf_publisher:
new_odom_to_map_transformation = new_base_link_to_map_transformation * odom_to_base_link_transformation
- last_odom_to_map_transformation is from the pose estimation performed using the last lidar scan. When the system starts, it is initialized to one of the options below:
- Identity matrix
- Computed using feature matching
- Computed using principal component analysis (PCA) or ICP (if the corrections needed are small, otherwise ICP may not converge to a valid solution)
- Manual user input (using rviz for example)
- The initial pose is known (for example, the robot shuts down in a known charging station, or the pose is loaded from a file containing the last estimated pose from when the system was running)
- base_link_to_odom_transformation uses the timestamp of the lidar sensor data
I usually provide the robot pose in the base_link or base_footprint frame: https://www.ros.org/reps/rep-0105.html http://wiki.ros.org/navigation/Tutorials/RobotSetup/TF
In drl, the main steps that involve transforming the sensor data are:
- Convert the lidar data to point cloud using laserscan_to_pointcloud, which is configured to transform it to the odom frame using linear spherical interpolation for correcting laser distortion
- Transform the point cloud to the map frame within drl using the last estimated odom -> map transformation (from the last estimated pose of the robot)
- After ICP estimates the robot pose, I publish a geometry_msgs/PoseStamped specifying the 6 DoF robot pose in the map frame, which is received by pose_to_tf_publisher, which then publishes the TF [odom -> map]:
transform_odom_to_map = transform_base_link_to_map * transform_odom_to_base_link
Have a nice day :)