autoware.universe
Add a new tool to measure end-to-end delay of the Autoware for sudden obstacles
Checklist
- [X] I've read the contribution guidelines.
- [X] I've searched other issues and no duplicate issues were found.
- [X] I've agreed with the maintainers that I can plan this task.
Description
In Autoware, accurately detecting the first response to sudden obstacles in various topics is challenging. There's a need for a tool that can create a test environment, listen to critical topics, and measure the system's reaction across different pipelines including Sensing, Perception, Planning, and Control.
This is a follow-up from:
- https://github.com/autowarefoundation/autoware.universe/issues/5540
Purpose
The purpose of this tool is to enhance the reliability and responsiveness of Autoware by providing a means to measure and analyze the delay in reacting to sudden obstacles. This will help in optimizing the system's performance across all stages of its pipeline, ensuring safer autonomous driving capabilities.
Possible approaches
- Develop a simulation environment within Autoware that can dynamically spawn obstacles.
- Implement a listener module that subscribes to key topics (topics publishing messages such as PredictedObjects, DetectedObjects, TrackedObjects, PointCloud2, Trajectory, and AckermannControlCommand) to monitor and log reactions.
- Design a measurement tool that calculates the time delay from the moment an obstacle is spawned to the generation of a control command.
- Incorporate a reporting mechanism to visualize and analyze the reaction times across different modules.
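The listener-and-measurement idea above can be sketched in plain Python. This is a minimal, hypothetical sketch (class and method names are invented, and the real tool would subscribe to ROS 2 topics rather than receive timestamps directly): record the obstacle spawn time, then keep only the first message seen on each monitored topic and report the per-topic delay.

```python
import time


class ReactionDelayTracker:
    """Hypothetical sketch: track the first reaction on each monitored topic
    after an obstacle spawn. Not the actual reaction_analyzer implementation."""

    def __init__(self, monitored_topics):
        self.monitored_topics = set(monitored_topics)
        self.spawn_time = None
        self.first_reaction = {}  # topic -> timestamp of first reacted message

    def mark_spawn(self, timestamp=None):
        # Reset state at the moment the obstacle is spawned.
        self.spawn_time = time.monotonic() if timestamp is None else timestamp
        self.first_reaction.clear()

    def on_message(self, topic, timestamp):
        # Record only the first message per monitored topic after the spawn.
        if self.spawn_time is None or topic not in self.monitored_topics:
            return
        if timestamp >= self.spawn_time and topic not in self.first_reaction:
            self.first_reaction[topic] = timestamp

    def delays(self):
        # Reaction delay per topic, in seconds.
        return {t: ts - self.spawn_time for t, ts in self.first_reaction.items()}


tracker = ReactionDelayTracker(["/control/command/control_cmd"])
tracker.mark_spawn(timestamp=100.0)
tracker.on_message("/control/command/control_cmd", 100.35)
tracker.on_message("/control/command/control_cmd", 100.50)  # ignored, not first
print(tracker.delays())  # delay of roughly 0.35 s for the control command topic
```

In the real system each `on_message` call would come from a subscription callback, with the timestamp taken from the message header or the published-time side channel.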
Definition of done
- [x] A simulation environment capable of spawning sudden obstacles is implemented.
- [x] A module for subscribing and listening to the necessary topics is developed.
- [x] The tool can accurately measure reaction times across Sensing, Perception, Planning, and Control pipelines.
- [ ] A comprehensive report detailing the reaction times and suggesting areas for optimization is produced.
- [x] Merge the PRs that solve the issue: https://github.com/autowarefoundation/autoware.universe/issues/6255
- [x] https://github.com/autowarefoundation/autoware/pull/4212
- [x] https://github.com/autowarefoundation/autoware_internal_msgs/pull/1
- [x] https://github.com/autowarefoundation/autoware.universe/pull/6440
- [x] https://github.com/autowarefoundation/autoware.universe/pull/6490
- [ ] Merge the PRs to close this issue.
- [x] https://github.com/autowarefoundation/autoware_launch/pull/886
- [x] https://github.com/autowarefoundation/autoware.universe/pull/6382
- [ ] https://github.com/autowarefoundation/autoware.universe/pull/5954
> create a test environment, listen to critical topics, and measure the system's reaction

It would be nice to write down these topics of interest, valid test conditions, and acceptable/unacceptable values from such measurements. For example,

- If the value in `/namespace/topic_name` is greater than `0`, then `/namespace/another_topic_name` will be greater than `5` within `10` milliseconds.
- Given `/namespace/topic_name` equals `ENUM_VALUE` and `/namespace/another_topic_name` is greater than `20`, then `/namespace/topic3` is always `true`.
What could be other things/constructs you find useful in writing such sentences for such analysis?
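One lightweight way to express rules of the first kind (purely illustrative, not an existing Autoware API; the topic names and rule shape follow the examples above) is a predicate check over a time-ordered log of samples with a deadline:

```python
def check_response_within(samples, trigger, response, deadline_s):
    """Check: whenever trigger(value) holds on /namespace/topic_name, some sample
    on /namespace/another_topic_name satisfying response(value) must appear
    within deadline_s seconds.

    samples is a list of (timestamp, topic, value) tuples sorted by time.
    Illustrative sketch only; a real checker would generalize the topic names.
    """
    for t0, topic, value in samples:
        if topic == "/namespace/topic_name" and trigger(value):
            ok = any(
                topic2 == "/namespace/another_topic_name"
                and t0 <= t1 <= t0 + deadline_s
                and response(v2)
                for t1, topic2, v2 in samples
            )
            if not ok:
                return False
    return True


log = [
    (0.000, "/namespace/topic_name", 1),
    (0.004, "/namespace/another_topic_name", 7),
]
# "if value > 0, then another_topic_name > 5 within 10 ms"
print(check_response_within(log, lambda v: v > 0, lambda v: v > 5, 0.010))  # True
```

The second rule shape ("given X and Y, then Z is always true") is an invariant rather than a bounded-response property, so it would be a separate predicate without the deadline.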
Hi @doganulus, thank you for the comment. Currently, we are working on merging this package. However, in the coming days, I am planning to add a detailed report to this issue or a show-and-tell (TBD).
Actually, in this report, I am planning to show 2 different kinds of tests for each node:

1. **Default Time Consumption:** In this test, I am going to show the processing times of each message in each pipeline by using `published_time_publisher`. For more information, please see #6255.

   This will show us how much time each node takes when it is executed. I am going to list the average calculation times of each node in the Sensing, Perception, and Planning pipelines.

2. **Reaction Response Time:** In this test, I will measure the reaction times when an obstacle is spawned suddenly in the Perception, Planning, and Control pipelines.
Reaction Time = (published time of the first reacted message) - (spawn time of the object)
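The formula above can be computed directly from ROS-style `(sec, nanosec)` header stamps. A small sketch (the helper names are mine, not part of the tool):

```python
def stamp_to_sec(sec, nanosec):
    """Convert a ROS-style (sec, nanosec) stamp, as in
    builtin_interfaces/msg/Time, to float seconds."""
    return sec + nanosec * 1e-9


def reaction_time(spawn_stamp, first_reacted_stamp):
    """Reaction Time = (published time of the first reacted message)
                     - (spawn time of the object).

    Both arguments are (sec, nanosec) tuples."""
    return stamp_to_sec(*first_reacted_stamp) - stamp_to_sec(*spawn_stamp)


# Obstacle spawned at t = 1000 s; first reacted message published 120 ms later.
print(reaction_time((1000, 0), (1000, 120_000_000)))  # ~0.12 s
```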
Here are the topics of interest for Reaction Response Time tests:
- For Planning & Control pipeline:

```yaml
obstacle_cruise_planner (or obstacle_stop_planner):
  topic_name: /planning/scenario_planning/lane_driving/trajectory
  message_type: autoware_auto_planning_msgs/msg/Trajectory
scenario_selector:
  topic_name: /planning/scenario_planning/scenario_selector/trajectory
  message_type: autoware_auto_planning_msgs/msg/Trajectory
motion_velocity_smoother:
  topic_name: /planning/scenario_planning/motion_velocity_smoother/trajectory
  message_type: autoware_auto_planning_msgs/msg/Trajectory
planning_validator:
  topic_name: /planning/scenario_planning/trajectory
  message_type: autoware_auto_planning_msgs/msg/Trajectory
trajectory_follower:
  topic_name: /control/trajectory_follower/control_cmd
  message_type: autoware_auto_control_msgs/msg/AckermannControlCommand
vehicle_cmd_gate:
  topic_name: /control/command/control_cmd
  message_type: autoware_auto_control_msgs/msg/AckermannControlCommand
```
- For Perception pipeline:

```yaml
common_ground_filter:
  topic_name: /perception/obstacle_segmentation/single_frame/pointcloud_raw
  message_type: sensor_msgs/msg/PointCloud2
occupancy_grid_map_outlier:
  topic_name: /perception/obstacle_segmentation/pointcloud
  message_type: sensor_msgs/msg/PointCloud2
multi_object_tracker:
  topic_name: /perception/object_recognition/tracking/near_objects
  message_type: autoware_auto_perception_msgs/msg/TrackedObjects
lidar_centerpoint:
  topic_name: /perception/object_recognition/detection/centerpoint/objects
  message_type: autoware_auto_perception_msgs/msg/DetectedObjects
obstacle_pointcloud_based_validator:
  topic_name: /perception/object_recognition/detection/centerpoint/validation/objects
  message_type: autoware_auto_perception_msgs/msg/DetectedObjects
decorative_tracker_merger:
  topic_name: /perception/object_recognition/tracking/objects
  message_type: autoware_auto_perception_msgs/msg/TrackedObjects
detected_object_feature_remover:
  topic_name: /perception/object_recognition/detection/clustering/objects
  message_type: autoware_auto_perception_msgs/msg/DetectedObjects
detection_by_tracker:
  topic_name: /perception/object_recognition/detection/detection_by_tracker/objects
  message_type: autoware_auto_perception_msgs/msg/DetectedObjects
object_lanelet_filter:
  topic_name: /perception/object_recognition/detection/objects
  message_type: autoware_auto_perception_msgs/msg/DetectedObjects
map_based_prediction:
  topic_name: /perception/object_recognition/objects
  message_type: autoware_auto_perception_msgs/msg/PredictedObjects
```
These are the topics for which I am going to measure the reaction times. Please feel free to share any suggestions or comments with me!
Thank you @brkay54. I will wait for your report. Looks very useful.
I previously created the following document by using the `reaction_analyzer`. The tests marked in yellow indicate more delay caused by this issue.
- Document: GoogleDocs