velodyne
Adding more precise scan separation feature
Hi all. I'm working on some mapping projects at MapIV, Inc. (a subsidiary of TierIV). During these projects, I found a way to improve the current Velodyne ROS driver, and my team and I think it might be worth sharing here. If I'm missing something or got something wrong, I would be more than happy to hear from you all :)
It seems to me that the current implementation only allows creating one full scan from a whole number of packets, say 76 packets/scan or 77 packets/scan. But sometimes one full scan actually corresponds to a whole number of packets plus a fraction of one, for example 76.3 packets/scan. In this case, rounding the number of packets/scan up or down will result in redundant points or missing points in a scan, respectively (see the figures below).
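For a rough sense of where the fractional packet count comes from, here is a small sketch using nominal VLP-16 timing from the user manual; the constants and function names are my own assumptions for illustration, not code from this PR:

```cpp
#include <cmath>

// Nominal VLP-16 timing (user manual): 12 data blocks per packet,
// 2 firing sequences per block, 55.296 us per firing sequence.
constexpr double kFiringSeqUs  = 55.296;
constexpr int    kSeqPerPacket = 12 * 2;
constexpr double kPacketUs     = kSeqPerPacket * kFiringSeqUs;  // ~1327.1 us

// Exact (fractional) number of packets per full revolution
// at a given motor speed in RPM.
double packetsPerScan(double rpm) {
  const double scan_us = 60.0e6 / rpm;  // one revolution, in microseconds
  return scan_us / kPacketUs;
}
```

At 600 RPM this gives roughly 75.35 packets per revolution, so rounding either way adds or drops about a third of a packet's worth of points every scan.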
Redundant points
Missing points
So, what I did was basically modify the driver to separate each scan more precisely. The separation mainly relies on the azimuth at which each point was taken (estimated from the packet azimuth and the firing timing). If the azimuth differences of three consecutive points have different signs, the last point is assumed to belong to a new scan. It is therefore stored in a residual point cloud, waiting to be merged into the next scan. The code for this can be found here.
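The boundary test described above can be sketched as follows; this is a hypothetical, simplified version (function and variable names are mine), assuming azimuth increases monotonically within a scan and wraps from near 360° back to 0°:

```cpp
// Given three consecutive per-point azimuths a0, a1, a2 (degrees),
// the scan boundary lies between a1 and a2 when the two successive
// differences change sign, e.g. ... 359.8, 359.9, 0.1 ...
// Using two differences (rather than one) guards against a single
// noisy reading flipping the decision.
bool startsNewScan(double a0, double a1, double a2) {
  const double d1 = a1 - a0;
  const double d2 = a2 - a1;
  return (d1 > 0.0) != (d2 > 0.0);  // differing signs => wrap-around
}
```

In the real driver, a point that triggers this test would be moved into the residual cloud and prepended to the next scan.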
Also, it is important to ensure that the number of packets in velodyne_msgs::VelodyneScan is greater than the exact packets/scan. For example, if the exact rate is 76.3 packets/scan, velodyne_msgs::VelodyneScan needs to contain 77 packets in total. The code for this part can be found here.
Lastly, I also use the estimated timestamp of the last point in a scan as the scan timestamp, instead of the last packet's timestamp (as in the current implementation) or the first packet's (as mentioned here).
Video comparing current driver and the modified driver
Limitations
- Only works with VLP16 and VLP16-Hires
- Only works with unsorted Point Cloud
- Might be slower than the current implementation as it includes more processing
Related discussions
- https://github.com/ros-drivers/velodyne/issues/280
- https://github.com/ros-drivers/velodyne/issues/242
- https://github.com/ros-drivers/velodyne/issues/231
@n-patiphon - It looks like CI is currently failing due to some linting errors. If you are using `python-catkin-tools` to build the package, please run `catkin build velodyne_driver velodyne_laserscan velodyne_pointcloud --no-deps --make-args roslint` and fix the resulting errors. If you are not using `python-catkin-tools`, run `catkin_make roslint_velodyne_driver roslint_velodyne_laserscan roslint_velodyne_pointcloud`.
Also, @n-patiphon, I believe the same thing can be achieved through the `cut_angle` feature. There might be a small bug in it right now, but try the version of the driver from `master`, hard-code the `cut_angle` parameter to 2*pi somewhere around line 137 of `velodyne_driver/src/driver/driver.cc`, and see if this gives you the same result.
Hi @JWhitleyAStuff, thank you for your suggestions. I have fixed the linting errors.
As for the `cut_angle` parameter, I am aware of this implementation. But as far as I understand, this option still separates scans at the packet level because it uses the azimuth from the packet header. This header azimuth is the angle of the first return in the first firing sequence of that packet, so it will still have the missing/redundant-points problem I mentioned earlier. The implementation in this PR, on the other hand, separates scans at the point level because it uses the interpolated azimuth of each point to decide whether it belongs to the same scan.
Besides, I went and rechecked the current `master`. As you suggested, I hard-coded `cut_angle` to 2*pi and ran the driver. These are the results.
As you can see, the current `master` creates a dense vertical patch (shown in the red circle), while this PR's implementation does not.
@n-patiphon - You mentioned that this "might be slower" due to more processing. Can you possibly do a simple CPU load test while running the node with and without these modifications and report the results? Also, what else would be required to apply this logic to the other models?
@n-patiphon - There are also conflicts with the current `master`. Please rebase.
@n-patiphon This is a very nice contribution. Thank you.
Do you have any update about the CPU usage and impact on measurements timing?
Hi, I just looked into the code. I would suggest simply adding another container that filters by the phi angle (with respect to the spherical coordinate system). The container could buffer the extra points and, if necessary, add them to the next cloud in the setup method. I think this would be a much more elegant approach, and it should not consume more processing time/power. Or am I missing something? Cheers
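If I understand the suggestion, the buffering container might look roughly like this; a minimal sketch with hypothetical names and a simplified point type, not actual driver code:

```cpp
#include <cstdint>
#include <vector>

// Simplified stand-in for the driver's point type.
struct PointXYZIR {
  float x, y, z, intensity;
  uint16_t ring;
};

// Holds points whose azimuth (phi) already belongs to the next
// revolution, and prepends them to the next cloud when it is set up.
class ScanResidualBuffer {
 public:
  // Hold back a point that crossed the cut angle.
  void hold(const PointXYZIR& p) { residual_.push_back(p); }

  // Called when the next scan's cloud is set up: move the buffered
  // points to the front of the new cloud and clear the buffer.
  void flushInto(std::vector<PointXYZIR>& cloud) {
    cloud.insert(cloud.begin(), residual_.begin(), residual_.end());
    residual_.clear();
  }

 private:
  std::vector<PointXYZIR> residual_;
};
```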
@spuetz or @YoshuaNava Would either of you be interested in continuing the implementation of this feature? I'm not sure that MapIV is still a thing (my contact at TierIV says he hasn't heard anything about them for quite some time).
Should be easy, I could do that in a couple of weeks. But I think I'll restart with the approach I suggested above.
Once you have a new PR ready, I'll close this one.
I'll wait on this until #347 and #335 are merged.
This modification is great: with it, I get a predictable motion distortion. With the original implementation, the motion distortion is unpredictable because the start time of each scan changes. I hope this modification will soon be available for the VLP-32C.
@n-patiphon @spuetz Pings all around.