LeGO-LOAM

Accuracy of using non-uniform lidar data distribution

robertsenputras opened this issue 3 years ago · 3 comments

Hi everyone,

Has anyone here tried to pass in non-uniform lidar data, such as RS-Lidar 32 (Robosense) data, which has this kind of distribution?

[Image: RS-Lidar 32 distribution]

What do you think about making a range image from this kind of data? Should I vary the vertical angular resolution for each vertical scan, or can we force it to have the same angular resolution? I have a concern about the accuracy of the SLAM. I saw that the two methods produce different range image representations, shown in the image below (top: same angular resolution; bottom: different).
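To make the trade-off concrete, here is a minimal sketch of the two row-index strategies. The beam angles below are hypothetical (an invented 8-beam layout, not the real RS-Lidar 32 table), and the function names are mine; the point is only to show that uniform binning collapses dense beams into the same row and leaves other rows empty, while a nearest-beam lookup gives every channel its own row.

```cpp
#include <array>
#include <cmath>
#include <cstddef>

// Hypothetical non-uniform vertical beam angles (degrees):
// dense near the horizon, sparse at the edges.
constexpr std::array<double, 8> kBeamAngles = {
    -15.0, -7.0, -3.0, -1.0, 0.0, 1.0, 3.0, 11.0};

// Strategy A: force a uniform angular resolution and bin by angle.
// Beams from the dense region collapse into the same row, and some
// rows of the range image never receive any points.
std::size_t rowByUniformBins(double angleDeg, double minDeg, double resDeg,
                             std::size_t numRows) {
  long idx = std::lround((angleDeg - minDeg) / resDeg);
  if (idx < 0) idx = 0;
  if (idx >= static_cast<long>(numRows)) idx = numRows - 1;
  return static_cast<std::size_t>(idx);
}

// Strategy B: map each point to the nearest physical beam, so every
// channel gets exactly one row and no rows stay empty.
std::size_t rowByNearestBeam(double angleDeg) {
  std::size_t best = 0;
  double bestDiff = std::abs(angleDeg - kBeamAngles[0]);
  for (std::size_t i = 1; i < kBeamAngles.size(); ++i) {
    double d = std::abs(angleDeg - kBeamAngles[i]);
    if (d < bestDiff) {
      bestDiff = d;
      best = i;
    }
  }
  return best;
}
```

With a uniform resolution of (11 − (−15))/7 degrees over these 8 hypothetical beams, the three beams at −1°, 0°, and 1° all bin to the same row, which is exactly the "same angular resolution" artifact in the top image.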

Do you have any idea how to handle this kind of lidar data? And what about the accuracy difference?

Thank you.

robertsenputras · Apr 27 '22 05:04

You can implement your own parameters matching your LiDAR in the following file
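For orientation, LeGO-LOAM keeps its per-sensor parameters (N_SCAN, Horizon_SCAN, angular resolutions, bottom angle) in utility.h. A sketch of what a 32-beam configuration might look like is below; the numeric values are hypothetical placeholders, and the real figures must come from your sensor's datasheet. Note the ang_res_y line, which only makes sense for a uniform vertical distribution:

```cpp
// Hypothetical values for a 32-beam sensor; check your datasheet.
const int N_SCAN = 32;          // number of vertical channels (rows)
const int Horizon_SCAN = 1800;  // horizontal samples per revolution (columns)
const float ang_res_x = 360.0f / float(Horizon_SCAN);  // horizontal resolution
const float ang_res_y = 40.0f / float(N_SCAN - 1);  // only valid if the
                                                    // vertical spacing is
                                                    // actually uniform!
const float ang_bottom = 25.0f;  // angle of the lowest beam below the horizon
const int groundScanInd = 20;    // rows considered for ground extraction
```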

L-Reichardt · Jul 27 '22 08:07

Thanks for your answer.

I want to ask a follow-up question about this. Since imageProjection.cpp produces a 2D range image that the feature extraction uses to extract edge and surface points, will it affect the performance of the map creation? I noticed that if I register the uneven lidar distribution by laser ID, I get far more surface and edge points, whereas if I register it using the linear equation (like the one inside imageProjection), not all of the lidar points are assigned to the range image, so there are fewer surface and edge points. Here's a picture.

  • Laser ID mapping: [image]

  • Linear equation: [image]

Note: I still use a 2D matrix with dimensions **num_channels × num_columns**.

What should I use, then? Will more surface points improve surface feature matching when the lidar distribution is not linear? I'm afraid the matching will be much worse.
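The laser-ID approach can be sketched without touching the vertical angle at all, if the driver emits a per-point ring field (the Robosense ROS driver can provide one). This is a minimal self-contained sketch, not LeGO-LOAM's actual code: PointR stands in for a driver point type, and projectByRing is a hypothetical replacement for the row computation in imageProjection.cpp. Every valid point lands in the row of its physical laser, so no rows are left empty by the non-uniform vertical distribution.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

const double kPi = 3.14159265358979323846;

// Minimal stand-in for a driver point carrying a per-point ring
// (laser ID) field assigned by the driver.
struct PointR {
  float x, y, z;
  std::uint16_t ring;  // 0 .. numRows-1
};

// Fill a dense range image indexed by ring instead of by a linear
// angle-to-row equation. Cells with no return are marked -1.
std::vector<std::vector<float>> projectByRing(
    const std::vector<PointR>& cloud, int numRows, int numCols) {
  std::vector<std::vector<float>> rangeImage(
      numRows, std::vector<float>(numCols, -1.0f));
  for (const auto& p : cloud) {
    if (static_cast<int>(p.ring) >= numRows) continue;  // malformed point
    // Column from azimuth, as in the stock projection.
    double azimuth = std::atan2(p.y, p.x);  // [-pi, pi]
    int col = static_cast<int>(
        std::round((azimuth + kPi) / (2.0 * kPi) * (numCols - 1)));
    if (col < 0 || col >= numCols) continue;
    rangeImage[p.ring][col] =
        std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
  }
  return rangeImage;
}
```

The downside, as you observed, is that row adjacency no longer corresponds to a constant angular step, so any smoothness computation that implicitly assumes uniform spacing between rows may behave differently.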

robertsenputras · Nov 29 '22 03:11

@robertsenputras The sensor I am working with has a uniform vertical angular resolution, so I do not have experience with your problem. Also, I am not familiar with the exact feature extraction method inside LeGO-LOAM. My gut feeling is that having empty pixels inside the range image (the bottom one) could be detrimental to the feature extraction. Skimming over the paper, it looks like LeGO-LOAM was designed for sensors with a uniform distribution. My best guess is that a uniform angular resolution range image (top image) would work best, but it's probably best verified through trial and error.

L-Reichardt · Nov 29 '22 08:11