calib.txt and poses.txt files for multiple lidars
Hi. Thanks for the wonderful tool. I am working on a dataset that consists of 6 LiDARs. I am fusing the point clouds with respect to one LiDAR and obtaining fused point clouds. Since I am not using the cameras for my application, I set up calib.txt, as mentioned in one of your previous issues, as
P0: 1 0 0 0 0 1 0 0 0 0 1 0
P1: 1 0 0 0 0 1 0 0 0 0 1 0
P2: 1 0 0 0 0 1 0 0 0 0 1 0
P3: 1 0 0 0 0 1 0 0 0 0 1 0
Tr: 1 0 0 0 0 1 0 0 0 0 1 0
(Doubt: should I instead give the transformation matrix of the reference LiDAR for Tr?)
For the poses, I ran KISS-ICP and generated a poses file in KITTI format. Should I now transform those poses again with respect to lidar_ref and save that as poses.txt, or should I use the file generated by KISS-ICP directly? I have actually tried both approaches, but the constant problem is that the static objects do not overlap properly. I have also checked the fused point clouds themselves and they look fine; only when I load them in the point cloud labeler do I face this motion-compensation issue. How do I solve this? How should my calib.txt and poses.txt look, say for 40 fused LiDAR scans of a particular scene?
Thanks in advance for your answer..!!
Let's untangle the problem:
- It should not matter which reference frame you are in, as long as it is the same for all point clouds.
- The only assumption I make is that z points upward and x points forward, which will potentially make navigating the point clouds cumbersome if your convention is different. Therefore, one can use Tr to account for poses given in a different coordinate system. In KITTI, the poses are given in a different (camera) coordinate system, therefore I apply the following to place the point clouds at pose $T'_t$ using the given pose $T_t$ at time $t$ from poses.txt: $T'_t = \texttt{Tr}^{-1}\cdot T_t \cdot \texttt{Tr}$. So if your sensor coincides with the coordinate system of the poses, it should be fine and $\texttt{Tr}$ is just the identity (a short sketch of this is given after the list).
- I assume that the point clouds are already undistorted, i.e., that the motion of the LiDAR scanner between two scans is already integrated. If motion distortion is a problem, then this has to be handled beforehand. I would suggest that you keep the order of the points when you modify the individual points; then the label files can also be used for the undistorted point clouds.

If you still have problems, a screenshot would also help to diagnose them.
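To make that formula concrete, here is a minimal sketch in Python/NumPy of how the corrected poses $T'_t$ can be computed from a KITTI-style poses.txt (12 row-major values of the 3x4 pose per line) and the `Tr:` entry of calib.txt. The helper names and file paths are only illustrative, not part of the labeler itself:

```python
import numpy as np

def load_kitti_poses(filename):
    """Load KITTI-style poses: 12 row-major values of the 3x4 pose per line."""
    poses = []
    with open(filename) as f:
        for line in f:
            T = np.eye(4)
            T[:3, :4] = np.array([float(v) for v in line.split()]).reshape(3, 4)
            poses.append(T)
    return poses

def load_tr(calib_filename):
    """Read the 'Tr:' entry of calib.txt as a homogeneous 4x4 matrix."""
    with open(calib_filename) as f:
        for line in f:
            if line.startswith("Tr:"):
                Tr = np.eye(4)
                Tr[:3, :4] = np.array([float(v) for v in line.split()[1:]]).reshape(3, 4)
                return Tr
    return np.eye(4)  # no Tr entry: treat it as the identity

# place each scan at T'_t = Tr^{-1} * T_t * Tr
Tr = load_tr("calib.txt")
Tr_inv = np.linalg.inv(Tr)
corrected_poses = [Tr_inv @ T @ Tr for T in load_kitti_poses("poses.txt")]
```

If `Tr` is the identity, the loop obviously leaves the poses untouched, which is exactly the case described above.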
Hi @jbehley, Thanks for the quick reply. All the LiDAR scans are already integrated. So, as per the explanation, I have put the output $T'_t$ into the poses.txt for the point cloud labeler, with Tr being the identity and $T_t$ the pose from KISS-ICP. Is this right, or is there something I am missing?
I am attaching the screenshots for better diagnosis.
This is what the complete scene looks like (the motion is compensated):
Single scan image:
Steps that I followed:
- In the calib.txt file:
  P0: 1 0 0 0 0 1 0 0 0 0 1 0
  P1: 1 0 0 0 0 1 0 0 0 0 1 0
  P2: 1 0 0 0 0 1 0 0 0 0 1 0
  P3: 1 0 0 0 0 1 0 0 0 0 1 0
  Tr: 1 0 0 0 0 1 0 0 0 0 1 0
- For the SLAM poses.txt file, I generated poses with KISS-ICP, then applied the transformation mentioned above and loaded the resulting poses.txt into the point_labeler (a sketch of both steps follows below).
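A minimal sketch of these two steps in Python, assuming this workflow (the helper names are illustrative): the identity calib.txt matches the layout above, and poses.txt is written KITTI-style, i.e., the 12 row-major values of the upper 3x4 part of each 4x4 pose:

```python
import numpy as np

def write_identity_calib(filename="calib.txt"):
    """Write a calib.txt where P0..P3 and Tr are all the identity [I | 0]."""
    identity_3x4 = "1 0 0 0 0 1 0 0 0 0 1 0"
    with open(filename, "w") as f:
        for name in ("P0", "P1", "P2", "P3", "Tr"):
            f.write(f"{name}: {identity_3x4}\n")

def save_kitti_poses(poses, filename="poses.txt"):
    """Write 4x4 poses as KITTI-style lines: 12 row-major values of the 3x4 part."""
    with open(filename, "w") as f:
        for T in poses:
            values = np.asarray(T)[:3, :4].reshape(-1)
            f.write(" ".join(f"{v:.9f}" for v in values) + "\n")

# example: identity calib plus 40 placeholder poses
write_identity_calib()
save_kitti_poses([np.eye(4) for _ in range(40)])
```

For the 40 fused scans mentioned above, the list passed to `save_kitti_poses` would simply be the 40 transformed $T'_t$ matrices instead of the identity placeholders.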
Kindly let me know which steps to correct so that I can align the point clouds as expected. Thanks in advance for your answer..!!
Are you sure that the scan poses are correct? The pose of the single scan looks oddly oriented, as if it were looking sideways.
Did you estimate the poses with the aggregated "bin" files of the multiple scanners, i.e., simply using the same files as used for the point_labeler? (As KISS-ICP has a KITTI reader, this should then simply estimate the poses.)
Finally, use the `add car points: true` option in the settings to add points at the origin of the sensor (which I removed for KITTI to get rid of the ego-vehicle points); this explains the "rectangular" cut-out of the single point cloud.
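For reference, the corresponding entry in the labeler's settings file is just this single `key: value` line (only this key is taken from the comment above; any other entries in your settings file can stay as they are):

```
add car points: true
```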
Hi @jbehley, Thanks for the reply and the idea. Yes, I had estimated the poses with the aggregated 'bin' files in SuMa as well as in KISS-ICP. The problem was that I was still using the wrong transformation matrix in the calib.txt file instead of the identity. Once I rectified that, I was able to get the point clouds of the environment into the point cloud labeler without the motion-compensation issue.
I am attaching the images. The point cloud in the image below is very dense. Do you think this is how it should look after loading it in the point cloud labeler? Is there any processing that can be done to make it look smoother?
Closer view of point clouds
The images are based on the poses estimated with SuMa. In the single-scan view, the rectangular cut-out is no longer present after adding 'add car points: true'. However, the dynamic objects are not estimated by SuMa. Is there any reason for that? Kindly let me know your insights on the above questions and the provided images.
Thanks in advance for your answer..!!
As a matter of fact, I would not recommend using SuMa, as it is somewhat outdated and probably cannot use all the information; better go for KISS-ICP, or even better KISS-SLAM, which provides pose-graph-optimized poses with loop closures. I'm sure the KISS-SLAM folks would be interested in some user experience reports.
The dynamics are not estimated, as it takes all points in the bin files as they are. It's also clear that handling them properly would require not just one pose but multiple ones for the motion of each individual object. I never thought of this... but it's an interesting avenue to estimate multiple poses... hmmm. 🤔
Anyway, the point clouds look good. I cannot say anything about your pre-integration or the quality/accuracy of your calibration; that might also lead to some inaccuracies, which may be what you mean by wanting "smoother" point clouds.
Hi @jbehley, Thanks for the suggestion. I have now used KISS-SLAM to estimate the poses, but I don't see any changes that I am aware of. In both approaches I observe that the dynamic objects leave "shadow" trails throughout the whole point cloud, which creates some noise during labeling. Is there any approach to overcome this issue?
Thanks in advance for your answer..!!
For dynamic objects, I used the following process:
- Ground removal (shortcut "R", with Page Up and Page Down) to find a sufficiently close value that removes the ground.
- Label the dynamic parts, and then use the filtering to remove them.
- Label the remaining parts of the cars (wheels and the like).
For SemanticKITTI, it was often the case that we used the following procedure:
- Label all dynamic parts (polygon tool, filtering, ...).
- Label easy-to-label parts like buildings (often with the polygon tool).
- Label all cars using the same process (remove ground, label points above ground, label the remainders).
- Lastly, label ground, sidewalk, and other objects.
But you will develop your own strategy over time, and depending on what you want to achieve in the end, it does not need to be super accurate (for benchmarking purposes the quality should be higher, but for training something you might get away with leaving out some points).
Hi @jbehley, Thank you for the reply. The actual problem is with the dynamic objects: the moving objects are not segmented, and their points are spread out along the whole trajectory, as shown in the image.
This is how it looks. Is there any way to overcome this issue?
Thanks in advance for your answer..!!
I think there is a misunderstanding: (1) either you filter the points beforehand, or (2) you simply label them as we did for KITTI (see the sketch below for option (1)).
Not sure what you mean by "overcome" this issue.
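Regarding option (1), here is a minimal sketch of such a pre-filtering step in Python, assuming KITTI-style .bin scans (N x 4 float32 values) and .label files (one uint32 per point, lower 16 bits = class ID) produced either by a first labeling pass or by some external dynamic-point-removal tool. The class IDs follow the SemanticKITTI convention for moving classes, and the file names are placeholders:

```python
import numpy as np

# class IDs treated as dynamic; these are the SemanticKITTI "moving" classes
# (252-259) and should be adapted to your own label map
DYNAMIC_CLASSES = [252, 253, 254, 255, 256, 257, 258, 259]

def filter_dynamic(scan_file, label_file, out_scan, out_label):
    """Drop all points whose semantic class (lower 16 bits) is dynamic."""
    points = np.fromfile(scan_file, dtype=np.float32).reshape(-1, 4)
    labels = np.fromfile(label_file, dtype=np.uint32)
    keep = ~np.isin(labels & 0xFFFF, DYNAMIC_CLASSES)
    points[keep].tofile(out_scan)    # relative point order is preserved
    labels[keep].tofile(out_label)

# placeholder file names; run this per scan before loading the data in the labeler
filter_dynamic("000000.bin", "000000.label", "000000_static.bin", "000000_static.label")
```

Writing filtered copies rather than overwriting the originals keeps the full scans available in case you decide to label the dynamic objects later after all.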