autoware.universe
Ground segmentation fails for points behind low objects
Checklist
- [X] I've read the contribution guidelines.
- [X] I've searched other issues and no duplicate issues were found.
- [X] I'm convinced that this is not my fault but a bug.
Description
Ground points behind small objects are sometimes segmented as part of the object's point cloud.
Here is an example of the issue, in the front-left area of the ego vehicle, generated using the sample rosbag.

Expected behavior
Only the roadside pole is classified as an object; no ground points are classified as objects.
Actual behavior
The ground points behind the roadside pole are classified as objects.
Steps to reproduce
- Rosbag replay simulation
- It happens at timestamp 1585897257.06
Versions
No response
Possible causes
No response
Additional context
No response
You can try to modify split_points_distance_tolerance in ground_segmentation.param.yaml, or radial_divider_angle_deg in scan_ground_filter_nodelet.cpp.
We have discussed this in ASWG.
@piotr-zyskowski-rai said that he was able to reproduce the error. His suggestion is to modify a parameter as proposed by @plane-li, but we might want rosbags from different cases so that we can confirm the change is valid for other use cases.
@mitsudome-r I think it's a good idea here not to play around with parameters manually and assess them visually, but to introduce some simple metrics that give insight into the overall quality of the segmentation across all detected clusters in the dataset. With such metrics, parameters could be tuned automatically, and we would make sure that by improving one case we don't break others.
We could develop a node for calculating such metrics in this task.
My initial thought is to use something like a 3D compactness metric to detect artifacts like the one shown in this issue, and additionally some metric related to the distance between clusters to detect whether a single object was split into several small objects.
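To make this concrete, here is a minimal sketch of what such metrics could look like, assuming each detected cluster is available as an N x 3 numpy array; the metric definitions are only illustrative, not a settled proposal:

```python
import numpy as np

def compactness(cluster_xyz: np.ndarray) -> float:
    """PCA-based compactness: ratio of the smallest to the largest eigenvalue
    of the cluster's point covariance. Values near 1.0 indicate a roughly
    isotropic blob; values near 0.0 indicate points smeared along a line or
    plane, e.g. a ground 'tail' dragged behind a pole."""
    if cluster_xyz.shape[0] < 2:
        return 1.0  # a single point is trivially compact
    eigvals = np.linalg.eigvalsh(np.cov(cluster_xyz.T))  # 3x3 covariance, ascending eigenvalues
    return float(eigvals[0] / eigvals[-1]) if eigvals[-1] > 0.0 else 1.0

def min_inter_cluster_distance(a_xyz: np.ndarray, b_xyz: np.ndarray) -> float:
    """Smallest pairwise distance between two clusters (brute force).
    Very small values hint that a single object was split into several clusters."""
    diffs = a_xyz[:, None, :] - b_xyz[None, :, :]  # (N, M, 3) pairwise differences
    return float(np.linalg.norm(diffs, axis=-1).min())
```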
@mitsudome-r Also, with limited effort, we could provide ground truth point cloud segmentation functionality in lidar simulation. It would make it possible to create a metric comparing the segmentation produced by Autoware with ground truth information.
@mitsudome-r @piotr-zyskowski-rai It would be very useful to have GT segmentation information in the simulator to properly tune the parameters of the segmentation algorithms. If a sensor similar to the one implemented in CARLA could be added to TierIV SIM, it would be perfect for improving the 3D segmentation algorithm.
Here is the link to the lidar data created with the simulator; semantic/instance segmentation point cloud data are included.
Link for the demo lidar segmentation data, created from MORAI SIM: Drive
In case more data are required for research, MORAI can provide them.
To test the data, I extracted only the segmentation part of the Autoware launch system and created a separate launch file to run it.
I played around with split_points_distance_tolerance in the ground_segmentation.param.yaml file and was able to reduce the size of the tail.
Before:
After:

@sglee-morai Thanks for the rosbag. It will be very helpful. For the purpose of this task, it would be great to have more small objects in the scene, like the poles that are shown in the example in the task description. If you could provide such rosbags along with segmentation information that would be great. Thanks!
I was able to run the provided rosbag with the segmentation launched via the launch file I prepared. I used the following steps to do that:
- Run the launch:
ros2 launch tier4_perception_launch perception_ground_seg.launch.xml
- The point cloud data's base frame is lidar instead of base_link, so I ran a static transform publisher:
ros2 run tf2_ros static_transform_publisher 0 0 5 0 0 0 1 base_link lidar
The 5 m translation along the z-axis is needed because the ground level seems to differ from what Autoware expects; without it, everything was treated as non-ground.
- Run the bag with remapping to comply with the segmentation node:
ros2 bag play lidar_intensity/ --remap /velodyne/points_xyzi:=/perception/obstacle_segmentation/range_cropped/pointcloud

There is some inaccuracy in segmentation whose origin I do not understand, but I will look into this. At the same time they are pretty far away so I am not sure we should be worried about that:

@piotr-zyskowski-rai Thanks a lot for the feedback! I'll create another rosbag as you suggested and share it with you.
About what you mentioned:
"There is some inaccuracy in segmentation whose origin I do not understand, but I will look into this. At the same time they are pretty far away so I am not sure we should be worried about that"
That may have been due to an error in the data. I used a 32-channel lidar model to create the rosbag, but the 32-channel lidar data seems to be off at that moment. I can create another version with the 16-channel model and check whether the same thing happens.
Also, you can use an evaluation version of MORAI SIM: Drive for this purpose, so that you can create a scene as you want. If you are interested, please let me know. Below is a simple document that I created for AWF, but it may not be detailed enough for your purposes.
Evaluation Version of MORAI SIM: Drive
Link for the Home Documentation (Created especially for AWF)
https://morai.atlassian.net/wiki/external/1098547535/MTJjMDRmZDlhZDhiNDI2YzhkNzgzMWJiMjNiYTYxMzc
IMPORTANT NOTE
Please note that this evaluation version has some issues (e.g. NPCs fail to follow their path when the scenario is loaded again). The schedule for the updated version release is under discussion.
@piotr-zyskowski-rai Sorry for being late in sharing another rosbag file.
My intention was to update MORAI SIM: Drive first and then record another one, but the update was delayed longer than expected. I have now updated MORAI SIM successfully and am creating some scenes, which look something like this. I think I can finish creating some bag files quite soon.

@piotr-zyskowski-rai
Hello Piotr, if you'd like you can use these rosbag files.
Here's the link to the download page: https://morai.atlassian.net/wiki/external/1133707265/OTQ5NTAxNjYxMjYwNDYxY2I1ZDJjNTFlNzBlYjcwYWU
Before downloading and checking them out, you can simply check whether these might suit your needs. There are three rosbag files:
- Normal Camera + Normal Lidar (VLP-16)
- Normal Camera + Semantic Segmentation Lidar (VLP-16)
- Semantic Segmentation Camera + Normal Lidar (VLP-16) https://www.youtube.com/watch?v=Juarz5JVrzA
The reason I used the VLP-16 instead of the previous model (I remember it was an HDL-32) is to avoid the bug mentioned earlier.
In case the point cloud density is too low, I can provide another rosbag created with different models. I'm thinking of creating one with the OS-1 64-channel model in MORAI SIM: Drive, but since I haven't run it with the ROS 2 driver provided by Ouster, I'm not sure how the result will turn out at this moment.
@sglee-morai Thanks a lot for the rosbag! I'll look into it soon.
@sglee-morai I ran the bags and they look very good. Thanks a lot! I have started preparing a processed + ground truth data set for metrics development.
@piotr-zyskowski-rai will assign someone else to take this over.
Current status of the task: I am writing a node to extract ground truth point clouds of ground and objects, using a ROS bag provided by MORAI, as it has the ground truth information available. In parallel, I implemented a simple segmentation metric to use for validating this task.
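The core of the extraction can be quite small; a minimal sketch, assuming the bag provides a per-point semantic label alongside x/y/z (the label IDs that count as ground in the MORAI output are an assumption here and would need to be checked):

```python
import numpy as np

# Placeholder set of semantic IDs treated as "ground" (road, sidewalk, ...);
# the actual IDs in the MORAI semantic-segmentation lidar output must be verified.
GROUND_LABEL_IDS = {1, 2}

def split_ground_and_objects(xyz: np.ndarray, labels: np.ndarray):
    """Split an (N, 3) point cloud into ground-truth ground and object points,
    given a per-point semantic label array of shape (N,)."""
    ground_mask = np.isin(labels, list(GROUND_LABEL_IDS))
    return xyz[ground_mask], xyz[~ground_mask]
```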
Related issue: https://github.com/autowarefoundation/autoware-projects/issues/11
I have finished a node for extracting ground truth point clouds of ground and objects (https://github.com/djargot/pcl_div). Using the node, I extracted data from the ROS bag provided by MORAI and used the segmentation metrics (#1600) to calculate quantitative results for ground segmentation quality.
Example results from segmentation metrics for a single point cloud:

As can be seen, it looks like for this particular ROS bag the ground is removed perfectly, in the sense that there are no false negatives. There are some false positives, which is to be expected in a scenario with perfect recall.
For this task and the problem stated at the very beginning, it would be ideal to have a labeled ROS bag where the problem occurs. Otherwise, only qualitative analysis can be done, which could lead to unwanted/unexpected behavior (e.g. increasing the number of false negatives significantly).
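For reference, a sketch of how such per-cloud numbers could be computed; treating ground as the positive class is an assumption about the convention used by the metrics node (#1600):

```python
import numpy as np

def ground_segmentation_metrics(pred_is_ground: np.ndarray, gt_is_ground: np.ndarray) -> dict:
    """Accuracy / recall / precision for one labeled point cloud.
    Both inputs are boolean arrays of shape (N,), True = point classified
    (or labeled) as ground."""
    tp = np.sum(pred_is_ground & gt_is_ground)    # ground correctly removed
    fp = np.sum(pred_is_ground & ~gt_is_ground)   # object points wrongly removed as ground
    fn = np.sum(~pred_is_ground & gt_is_ground)   # ground left in the obstacle cloud (the "tail")
    tn = np.sum(~pred_is_ground & ~gt_is_ground)  # object points correctly kept
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total if total else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
    }
```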
By adjusting the following ground segmentation parameters, split_points_distance_tolerance and split_height_distance, I was able to remove the tail. I also ran the modified ground segmentation with the ROS bag provided by MORAI, and it looks OK on visual assessment. However, I still need to check thoroughly that the general performance of the algorithm is not worsened by the modified parameters (a sketch of how such a check could be automated follows the before/after images below).
Before:

After:

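One way to make that general-performance check systematic rather than purely visual is to collect the cumulative metrics for each candidate parameter set and select the best one automatically. A small sketch, assuming the metrics node (#1600) has already been run for each configuration; the parameter values and metric numbers below are placeholders, not the values actually used:

```python
# Hypothetical cumulative metrics per candidate parameter set of the scan ground
# filter, as produced by the segmentation metrics node; all values are placeholders.
candidates = [
    {"split_points_distance_tolerance": 0.20, "split_height_distance": 0.20,
     "precision": 0.995, "recall": 0.57},
    {"split_points_distance_tolerance": 0.10, "split_height_distance": 0.15,
     "precision": 0.994, "recall": 0.60},
]

baseline = candidates[0]
# Accept only parameter sets whose precision stays within 0.5 percentage points
# of the baseline, then pick the one with the highest recall.
acceptable = [c for c in candidates if c["precision"] >= baseline["precision"] - 0.005]
best = max(acceptable, key=lambda c: c["recall"])
print(best)
```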
I checked the different parameters that I proposed and also tried another ground segmentation method, as suggested here: https://github.com/autowarefoundation/autoware.universe/pull/1866#issuecomment-1252537551 (ray_ground_filter_nodelet instead of scan_ground_filter_nodelet for ground segmentation). It appears that the ray_ground_filter_nodelet method works better and does not produce a tail behind the pole.
Also, using the segmentation metrics node, I obtained quantitative results with the labeled point clouds provided by MORAI. The numbers below are cumulative results over 748 labeled point clouds from the aforementioned ROS bag.
The "tuned parameters" column uses the changed split_points_distance_tolerance and split_height_distance described in my previous comment.
| Metric (%) | scan_ground_filter_nodelet (original) | scan_ground_filter_nodelet (tuned parameters) | ray_ground_filter_nodelet |
| --- | --- | --- | --- |
| Accuracy | 88.42 | 89.00 | 89.38 |
| Recall | 56.94 | 59.16 | 60.12 |
| Precision | 99.67 | 99.63 | 99.60 |
scan_ground_filter_nodelet:

ray_ground_filter_nodelet:

My final thought would be to switch to ray_ground_filter_nodelet if the problem occurs.
@djargot We have faced the same problem on the BUS ODD project and changed the ground removal algorithm to ray_ground_filter_nodelet; it seems to work better.
I have checked that https://github.com/autowarefoundation/autoware.universe/pull/1899 seems to remove the problem for scan_ground_filter_nodelet.
@djargot Are there any other remaining issues? If not, I would like to close this issue. (and close https://github.com/autowarefoundation/autoware.universe/pull/1866 if it is not needed anymore)
@djargot friendly ping
@mitsudome-r I am very sorry for the late reply. I can confirm that with the current version of ground segmentation the problem does not occur, so this issue can be closed as resolved by #1899.