autoware.universe
The distortion corrector package doesn't compensate for all motion distortion
Checklist
- [X] I've read the contribution guidelines.
- [X] I've searched other issues and no duplicate issues were found.
- [X] I've agreed with the maintainers that I can plan this task.
Description
The current implementation of the distortion corrector package only takes into account the linear x velocity from twist messages and the yaw rate from the IMU. As a result, it does not produce a motion-compensated point cloud in cases such as driving over a speed bump or the ego roll angle changing while the vehicle is turning.
Purpose
Compensate for all motion distortion in the lidar point cloud.
Possible approaches
Solution 1:
In the current sensor setup, we have high-frequency IMU data. This enables us to calculate the orientation change between the timestamp of the point cloud (the timestamp of the initial scan) and the timestamp of each individual point. Additionally, using the localization stack (EKF), we can determine the vehicle's displacement during this interval. We can then apply a reverse transformation from the pseudo sensor base frame (IMU orientation + displacement from the EKF) to the sensor base frame.
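For illustration, a minimal sketch of this idea with Eigen; the function and argument names are hypothetical, and the interpolation of the IMU orientation and the EKF position at the two timestamps is omitted:

```cpp
#include <Eigen/Geometry>

// Map a point captured at t_point back into the sensor base frame at t_scan.
// q_* come from interpolated IMU orientations and pos_* from interpolated EKF
// positions at the two timestamps (the interpolation itself is omitted here).
Eigen::Vector3d undistortPoint(
  const Eigen::Vector3d & p,
  const Eigen::Quaterniond & q_scan, const Eigen::Quaterniond & q_point,
  const Eigen::Vector3d & pos_scan, const Eigen::Vector3d & pos_point)
{
  // Orientation change between the scan start and the point's capture time.
  const Eigen::Quaterniond dq = q_scan.conjugate() * q_point;

  // Displacement over the same interval, expressed in the frame at t_scan.
  const Eigen::Vector3d dt = q_scan.conjugate() * (pos_point - pos_scan);

  // Reverse transformation from the pseudo sensor base frame (IMU orientation
  // + EKF displacement) back to the sensor base frame at the scan timestamp.
  return dq * p + dt;
}
```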
Solution 2:
Improve the current algorithm by also integrating the pitch rate.
Limitation: only the linear x velocity is available from the twist message.
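For reference, a minimal sketch of what this could look like, extending the current yaw-only dead reckoning with the pitch rate; the names and the simple Euler integration scheme are illustrative, not the package's actual code:

```cpp
#include <cmath>

// Accumulated pose between the scan timestamp and the current point,
// following REP-103 axes (x forward, z up; positive pitch tilts forward down).
struct Pose
{
  double x{0.0}, y{0.0}, z{0.0};  // accumulated displacement
  double yaw{0.0}, pitch{0.0};    // accumulated orientation
};

// The current implementation integrates only v_x and yaw_rate; adding the
// pitch rate lets the z displacement (e.g. on a speed bump) be recovered.
void integrateStep(Pose & pose, double v_x, double yaw_rate, double pitch_rate, double dt)
{
  pose.yaw += yaw_rate * dt;
  pose.pitch += pitch_rate * dt;
  pose.x += v_x * std::cos(pose.pitch) * std::cos(pose.yaw) * dt;
  pose.y += v_x * std::cos(pose.pitch) * std::sin(pose.yaw) * dt;
  pose.z += -v_x * std::sin(pose.pitch) * dt;  // nonzero while pitching over a bump
}
```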
Definition of done
- [ ] A possible solution was decided
- [ ] Distortion corrector package refactored
Please feel free to share your ideas.
FYI: @drwnz @miursh @xmfcx
@vividf was working on things related to this
Hi @vividf , what's your current plan?
Hi, @kaancolak,
What I did before was to use the information from the twist (linear xyz, though as you said we only have linear x from the twist) and the IMU (angular xyz) to compensate the point cloud in the sensor_frame (lidar_frame), not in base_link (see the sketch after this list):
- Apply an adjoint map to transform the twist from base_link to the sensor frame.
- Apply a rotation matrix to transform the angular velocity from the IMU to the sensor frame.
- Replace the twist's angular velocity with the IMU's.
- Apply the exponential map with the time_offset to estimate the motion over the period.
- Get the undistorted point by multiplying the transformation matrix with the distorted point.
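For reference, a rough sketch of these steps with Eigen; the adjoint and exponential maps are written out explicitly, and the frame conventions and the use of the 4x4 matrix exponential are assumptions for illustration, not necessarily the exact implementation:

```cpp
#include <Eigen/Geometry>
#include <unsupported/Eigen/MatrixFunctions>  // matrix exponential

// Skew-symmetric (hat) matrix of a 3-vector.
Eigen::Matrix3d skew(const Eigen::Vector3d & v)
{
  Eigen::Matrix3d m;
  m <<      0, -v.z(),  v.y(),
        v.z(),      0, -v.x(),
       -v.y(),  v.x(),      0;
  return m;
}

// Steps 1-3: adjoint map of T (sensor <- base_link) applied to a twist
// [v; w] expressed in base_link, giving the equivalent twist in the sensor
// frame: v' = R v + [t]x R w,  w' = R w.
Eigen::Matrix<double, 6, 1> adjoint(
  const Eigen::Isometry3d & T, const Eigen::Matrix<double, 6, 1> & twist)
{
  const Eigen::Matrix3d R = T.rotation();
  const Eigen::Vector3d t = T.translation();
  Eigen::Matrix<double, 6, 1> out;
  out.head<3>() = R * twist.head<3>() + skew(t) * (R * twist.tail<3>());
  out.tail<3>() = R * twist.tail<3>();
  return out;
}

// Step 4: exponential map of the twist scaled by the per-point time_offset,
// here via the matrix exponential of the 4x4 twist matrix for brevity.
Eigen::Isometry3d expMap(const Eigen::Matrix<double, 6, 1> & twist, double time_offset)
{
  Eigen::Matrix4d xi = Eigen::Matrix4d::Zero();
  xi.topLeftCorner<3, 3>() = skew(twist.tail<3>());
  xi.topRightCorner<3, 1>() = twist.head<3>();
  const Eigen::Matrix4d T = (xi * time_offset).exp();
  return Eigen::Isometry3d(T);
}

// Step 5: undistorted point = transformation matrix * distorted point
// (the sign convention of time_offset decides whether this is the motion
// or its inverse).
Eigen::Vector3d undistort(const Eigen::Isometry3d & motion, const Eigen::Vector3d & p)
{
  return motion * p;
}
```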
The reason we didn't use this algorithm is that it almost doubles the processing time.
Thank you for the clarification @vividf. I apologize for the delayed response; I haven't had time to work on this issue yet due to my other tasks.
I understand your approach. The current implementation of the lidar distortion corrector has difficulty handling the complex situations mentioned in this issue (it uses just the yaw rate). Maybe we can add this solution as an option in the distortion corrector package?
@kaancolak Sure, I will create a new branch and add the implementation for you guys to test the performance.
Thank you @vividf, I re-assigned this issue to you (cc @xmfcx). Before we created this issue, we talked with Fatih and planned to implement an approach similar to yours (or to use a different displacement source). However, you've already put in some work on it, so let's test the outcome. If it performs satisfactorily, we can try optimizing certain aspects to address the processing-time concern.
@kaancolak Please use this branch (https://github.com/autowarefoundation/autoware.universe/tree/feat/3d_distortion_corrector) to test whether it works for your case.
This branch is a bit different from what I implemented before (it undistorts in base_link instead of in the sensor frame), so it should be faster. Hope this helps!
Hello @vividf . Thanks for your work. I tested it.
First of all, I compared the distortion corrector's input and output point clouds to understand the difference introduced by your branch. I ran this test by viewing the moments when the vehicle passed over speed bumps.
Without 3d distortion corrector branch:
Blue pc : Input of the distortion corrector
White pc : Output of the distortion corrector
We can see here that the point cloud output by the distortion corrector does not change in the z direction.
With 3d distortion corrector branch:
Blue pc : Input of the distortion corrector
White pc : Output of the distortion corrector
We can see here that the point cloud output by the distortion corrector does change in the z direction. After seeing this change, I tested it with ground segmentation to see if it provided an improvement.
You can see below that, before the changes, ground segmentation adds ground points to the non-ground points when passing over a speed bump. Before the 3d distortion corrector (ground segmentation test): video link is here.
I observed that this error did not occur after checking out the 3d distortion corrector branch. After the 3d distortion corrector (ground segmentation test): video link is here.
Which method did you use to test it? If there is another method I can use, I can test it as well. Based on these tests, I believe your branch offers an improvement. Are you planning to create a PR from this branch and add it to Autoware?
Note: By default, the distortion corrector node uses the /sensing/vehicle_velocity_converter/twist_with_covariance topic as input. However, the angular velocity fields of this topic appear to be empty, and I saw that your code also uses these fields. That's why I ran the tests with the /localization/twist_estimator/twist_with_covariance topic instead.
@vividf please create the PR from the branch, since it improves Autoware's performance.
@meliketanrikulu @xmfcx Thanks for testing! I created a draft PR: https://github.com/autowarefoundation/autoware.universe/pull/7031. Could you test it again to make sure there are no issues, and post the results to the PR as well?
Thanks!
The related PR has been merged: https://github.com/autowarefoundation/autoware.universe/pull/7137. We can close this issue. Thanks for your work @vividf!