autoware.universe
Evaluate control modules in Universe
Checklist
- [X] I've read the contribution guidelines.
- [X] I've searched other issues and no duplicate issues were found.
- [X] I've agreed with the maintainers that I can plan this task.
Description
Investigate the performance of the control pipeline in Universe.
Purpose
Evaluate current control pipeline and confirm that it has enough features for Bus ODD.
Possible approaches
Create benchmark tools to test control modules (e.g. calculate offsets from the trajectory) and evaluate the control modules.
Definition of done
- [x] Design a benchmark tool to evaluate control modules (also check with Sim WG)
- [x] Run benchmark tools against control modules
- [ ] Create issues for missing features or bugs found from the results.
@mitsudome-r There is a package called control_performance_analysis in Control. I am confused: is the purpose of this issue to develop this package further, or is this package not going to be used anymore, so we need to develop a new one?
Mentioned package: https://github.com/autowarefoundation/autoware.universe/tree/main/control/control_performance_analysis
Here is my initial control benchmark tool proposal:
I listed there the possible outputs from the observation and evaluation node. The advantage of this architecture is that we can test all controllers with a predefined noise level (the noise will be added as a parameter), and we will be able to calculate the controller errors by using ground-truth data. Also, we do not have to run both the observation and evaluation node and the environment setup node; we can run the tool with the full autoware.universe stack without the environment setup node.
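To illustrate that idea, here is a minimal sketch (not part of the actual proposal; the function names, the [x, y, yaw, v] state layout, and the noise parameters are assumptions made up for this example) of parameterized noise injection and ground-truth-based error calculation:

```python
# Minimal sketch of the noise-injection idea (illustrative only; names and the
# [x, y, yaw, v] state layout are assumptions, not part of the actual proposal).
import numpy as np

rng = np.random.default_rng(seed=0)

def add_measurement_noise(ground_truth_state, noise_std):
    """Return a noisy copy of the ground-truth state; noise_std is the configurable parameter."""
    state = np.asarray(ground_truth_state, dtype=float)
    return state + rng.normal(0.0, np.asarray(noise_std, dtype=float))

def tracking_error(ground_truth_state, reference_state):
    """Controller error computed against the ground truth, not the noisy measurement."""
    return np.asarray(ground_truth_state, dtype=float) - np.asarray(reference_state, dtype=float)

# Example: setting noise_std to zeros reproduces the noise-free
# simple_planning_simulator case mentioned below.
noisy_state = add_measurement_noise([10.0, 2.0, 0.05, 3.0], noise_std=[0.1, 0.1, 0.01, 0.05])
error = tracking_error([10.0, 2.0, 0.05, 3.0], [10.2, 1.9, 0.04, 3.1])
```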
Moreover, we can monitor the driving criteria with respect to the figure below:
I assumed that the simulation will provide ground-truth data, but if it does not, we can still use the tool with estimated data (maybe with small changes).
Also, it can be used with `autoware.universe/simulator/simple_planning_simulator`.
Design is here:
Until the simulator is ready, we can develop this architecture with simple_planning_simulator (by setting the noise to zero).
@TakaHoribe Could you explain what control_performance_analysis is used for, and what kind of metrics are used to generate the results?
This package computes the following tracking error outputs:
- lateral error e_y
- heading error e_yaw
The tracking errors are computed by projecting the current vehicle location onto the current trajectory. In addition, the generated control vector u is used to compute a quadratic, energy-like value similar to the one we use in the MPC computations: u R u^T. We use the same kind of energy for the tracking errors: x P x^T.
The cost matrices R and P can simply be taken as identity matrices. However, in the package we use R and P matrices computed from the stability LMI equations for a kinematic feedback controller.
In addition, the package computes an approximate curvature for each waypoint on the trajectory.
Using the curvature, we can compute the lateral velocity and acceleration.
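As a rough illustration of those definitions (a minimal sketch, not the actual `control_performance_analysis` code; all function and variable names below are made up), the errors, the quadratic energy, and the waypoint curvature could be computed like this:

```python
# Illustrative sketch of the metrics described above (not the actual package code).
import numpy as np

def wrap_angle(angle):
    """Normalize an angle to the range [-pi, pi)."""
    return (angle + np.pi) % (2.0 * np.pi) - np.pi

def tracking_errors(traj_x, traj_y, traj_yaw, veh_x, veh_y, veh_yaw):
    """Project the vehicle pose onto the nearest trajectory point and return (e_y, e_yaw, index)."""
    dx = veh_x - np.asarray(traj_x, dtype=float)
    dy = veh_y - np.asarray(traj_y, dtype=float)
    i = int(np.argmin(np.hypot(dx, dy)))
    ref_yaw = traj_yaw[i]
    # Lateral error: displacement expressed along the normal of the reference heading.
    e_y = -np.sin(ref_yaw) * dx[i] + np.cos(ref_yaw) * dy[i]
    # Heading error: yaw difference wrapped to [-pi, pi).
    e_yaw = wrap_angle(veh_yaw - ref_yaw)
    return e_y, e_yaw, i

def quadratic_energy(vec, weight=None):
    """Energy-like value v W v^T; W defaults to the identity matrix."""
    v = np.atleast_1d(np.asarray(vec, dtype=float))
    w = np.eye(v.size) if weight is None else np.asarray(weight, dtype=float)
    return float(v @ w @ v)

def approx_curvature(p0, p1, p2):
    """Approximate curvature from three consecutive waypoints (circumscribed circle)."""
    a = np.hypot(p1[0] - p0[0], p1[1] - p0[1])
    b = np.hypot(p2[0] - p1[0], p2[1] - p1[1])
    c = np.hypot(p2[0] - p0[0], p2[1] - p0[1])
    # Twice the triangle area via the 2D cross product.
    area2 = abs((p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0]))
    return 0.0 if a * b * c == 0.0 else 2.0 * area2 / (a * b * c)

# With curvature kappa and vehicle speed v, the path-implied lateral quantities are
# approximately: v_lat ≈ v * sin(e_yaw) and a_lat ≈ v**2 * kappa.
```

With `weight=None` (identity), the energy reduces to the squared norm of u or x; the package instead uses the R and P matrices obtained from the LMI-based stability analysis mentioned above.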
We decided to modify the `control_performance_analysis` package. I will add new functionalities to this package. The new design is here:
I will create a PR for this design. If you have any comments, please feel free to share them with us.
@mitsudome-r The `control_performance_analysis` package has been updated and merged into main. I am going to evaluate the current controllers and update them if I see any possible improvements. Also, I am going to update the default parameters of the controllers based on the evaluation of the default planning_simulator setup from the tutorials.
Example usage of control performance analysis tool: https://github.com/orgs/autowarefoundation/discussions/412#discussioncomment-2947841
Tests were made in Universe with commit SHA aa576c066adc93721e24403723291998a47109a0 and default parameter values.
To reproduce the tests:
- Download the scenario used in my test: link.
- Run `scenario_simulator` with the scenario above.
- Run the `control_performance_analysis` tool.
- Run `plotjuggler` and import the layout.
- After the scenario is over, export the statistics by using the CSV Exporter in PlotJuggler (a script to recompute the statistics from the export is sketched below).
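For reference, here is a rough sketch (the CSV file name and the timestamp column names are placeholders; the actual PlotJuggler export layout may differ) that recomputes the min/max/average and RMS statistics shown in the tables below from the exported CSV:

```python
# Illustrative only: recompute min/max/average/RMS per series from a PlotJuggler CSV export.
# "exported_statistics.csv" and the timestamp column names are placeholders.
import numpy as np
import pandas as pd

df = pd.read_csv("exported_statistics.csv")

for column in df.columns:
    if column.lower() in ("__time", "time", "timestamp"):  # skip the time axis if present
        continue
    series = df[column].dropna().to_numpy(dtype=float)
    if series.size == 0:
        continue
    print(
        f"{column}: min={series.min():.6f} max={series.max():.6f} "
        f"avg={series.mean():.6f} rms={np.sqrt(np.mean(series ** 2)):.6f}"
    )
```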
Results: Driving status statistics:
Series | mpc_follower Min | mpc_follower Max | mpc_follower Average | pure_pursuit Min | pure_pursuit Max | pure_pursuit Average |
---|---|---|---|---|---|---|
lateral_acceleration | -0.533191 | 0.64716 | 0.012966 | -0.403486 | 0.493776 | 0.014047 |
lateral_jerk | -4.017719 | 2.590968 | 4.3E-05 | -1.983593 | 2.388439 | 0.000311 |
longitudinal_acceleration | -0.623581 | 1.129492 | 0.010821 | -0.618914 | 1.160824 | 0.012287 |
longitudinal_jerk | -36.026505 | 40.491296 | -0.061777 | -27.96847 | 42.131553 | -0.061813 |
- `pure_pursuit` can drive the vehicle with smaller lateral acceleration and jerk values.
RMS Values of performance statistics:
Series | mpc_follower | pure_pursuit |
---|---|---|
rms_heading_error | 2360.288684 | 2128.115518 |
rms_heading_velocity_error | 1029.922135 | 946.282237 |
rms_lateral_acceleration_error | 2685.29172 | 2356.478111 |
rms_lateral_error | 13429.126747 | 11978.168221 |
rms_lateral_velocity_error | 1895.337207 | 1647.820159 |
rms_tracking_curvature_discontinuity_ability | 30.440265 | 32.110788 |
- As you can see from the results, `pure_pursuit` has smaller lateral and heading errors, and its tracking_curvature_discontinuity ability is also higher than that of `mpc_follower`. As a result, the current implementation of `pure_pursuit` tracks the trajectories better than `mpc_follower` with the default parameters.
@brkay54 Thank you for running the test.
Discussion from ASWG:
- @brkay54 to try more tests in different maps with different paths (e.g., with more curves)
- Also, providing screenshots of the tests might help us understand what is causing the larger error in `mpc_follower`.
There are some updates about testing. There were two blockers for me.
- Firstly, I realized that the heading and lateral errors were increasing a lot even on a straight road. I investigated this and found that the `control_performance_analysis` tool sometimes chooses the wrong nearest trajectory point. It was fixed with this PR.
- Secondly, I tried to run the evaluation tool in different maps and over a long time range, but PlotJuggler's buffer size is not enough to record all the data. I opened an issue for this.

For now, I am running this test only on kashiwanoha_map because of the second blocker (I cannot collect all the data for a long time period). I have collected data in kashiwanoha_map and, after some processing, I will share it here.
Both MPC and pure_pursuit were tested in three different environments:
- sample_vehicle + kashiwanoha_map
- sample_vehicle + gebze_map
- isuzu_vehicle + gebze_map
Driving status
- Test - 1 sample_vehicle + kashiwanoha_map
Series | PP Min | MPC Min | PP Max | MPC Max | PP Average | MPC Average |
---|---|---|---|---|---|---|
lateral_acceleration | -0.403416 | -0.547744 | 0.481827 | 0.618049 | 0.020227 | 0.017232 |
lateral_jerk | -1.983906 | -4.221178 | 2.424854 | 2.450203 | -0.013859 | -0.002141 |
- Test - 2 sample_vehicle + gebze_map
Series | PP Min | MPC Min | PP Max | MPC Max | PP Average | MPC Average |
---|---|---|---|---|---|---|
lateral_acceleration | -0.502603 | -0.517804 | 0.18027 | 0.215238 | -0.040143 | -0.040282 |
lateral_jerk | -1.276513 | -1.565028 | 0.913313 | 1.330014 | -0.000146 | -0.000129 |
- Test - 3 isuzu_vehicle + gebze_map
Series | PP Min | MPC Min | PP Max | MPC Max | PP Average | MPC Average |
---|---|---|---|---|---|---|
lateral_acceleration | -0.779351 | -0.82164 | 0.271746 | 0.308302 | -0.062338 | -0.062065 |
lateral_jerk | -1.985067 | -2.340564 | 1.413154 | 1.961581 | -0.00036 | -0.000516 |
- MPC has higher peak lateral acceleration and jerk values than pure_pursuit.
Performance variables
- Test - 1 sample_vehicle + kashiwanoha_map
Series | PP Min | MPC Min | PP Max | MPC Max | PP Average | MPC Average |
---|---|---|---|---|---|---|
heading_error | -0.443751 | -0.579112 | 0.402318 | 0.421394 | 0.002471 | -0.007 |
heading_error_velocity | -0.191558 | -0.203922 | 0.248491 | 0.376283 | 0.001939 | -0.000965 |
lateral_error | -0.381273 | -0.22516 | 0.439361 | 0.439361 | 0.009929 | 0.046452 |
lateral_error_acceleration | -0.424266 | -0.586141 | 0.674285 | 1.010908 | 0.011949 | 0.008286 |
lateral_error_velocity | -0.887937 | -0.905332 | 0.882841 | 0.917043 | 0.006962 | 0.007918 |
- Test - 2 sample_vehicle + gebze_map
Series | PP Min | MPC Min | PP Max | MPC Max | PP Average | MPC Average |
---|---|---|---|---|---|---|
heading_error | -0.355129 | -0.290287 | 0.152418 | 0.063587 | -0.005227 | -0.003886 |
heading_error_velocity | -0.102993 | -0.092189 | 0.111945 | 0.126462 | -0.000953 | -6.2E-05 |
lateral_error | -0.280567 | -0.289781 | 0.220632 | 0.222301 | -0.014059 | -0.00631 |
lateral_error_acceleration | -0.318318 | -0.268501 | 0.319817 | 0.354164 | -0.002302 | -0.002464 |
lateral_error_velocity | -0.819191 | -1.08276 | 1.149198 | 0.27054 | -0.014403 | -0.015096 |
- Test - 3 isuzu_vehicle + gebze_map
Series | PP Min | MPC Min | PP Max | MPC Max | PP Average | MPC Average |
---|---|---|---|---|---|---|
heading_error | -0.604754 | -0.188215 | 0.175775 | 0.04927 | -0.00514 | -0.004605 |
heading_error_velocity | -0.338885 | -0.397858 | 0.109483 | 0.12697 | -0.022434 | -0.021345 |
lateral_error | -0.30969 | -0.351939 | 0.231677 | 0.233544 | -0.014029 | -0.012137 |
lateral_error_acceleration | -0.915756 | -1.127259 | 0.489062 | 0.571718 | -0.078175 | -0.076123 |
lateral_error_velocity | -0.945196 | -1.023388 | 0.740805 | 0.217162 | -0.014718 | -0.018148 |
Performance variables total error (RMS)
- Test - 1 sample_vehicle + kashiwanoha_map
Series | PP | MPC |
---|---|---|
rms_heading_error | 555.11382 | 589.214804 |
rms_heading_velocity_error | 758.785943 | 862.506255 |
rms_lateral_acceleration_error | 1795.123815 | 1991.941973 |
rms_lateral_error | 1592.406027 | 1766.117106 |
rms_lateral_velocity_error | 1273.025241 | 1214.16156 |
rms_tracking_curvature_discontinuity_ability | 40.785818 | 40.586663 |
- Test - 2 sample_vehicle + gebze_map
Series | PP | MPC |
---|---|---|
rms_heading_error | 288.937329 | 196.057345 |
rms_heading_velocity_error | 272.279047 | 258.74408 |
rms_lateral_acceleration_error | 1154.30252 | 1120.854012 |
rms_lateral_error | 1261.378532 | 1106.977478 |
rms_lateral_velocity_error | 1194.883565 | 824.382134 |
rms_tracking_curvature_discontinuity_ability | 15.148316 | 15.160162 |
- Test - 3 isuzu_vehicle + gebze_map
Series | PP | MPC |
---|---|---|
rms_heading_error | 270.866384 | 210.666686 |
rms_heading_velocity_error | 687.926264 | 742.115233 |
rms_lateral_acceleration_error | 2811.365921 | 3094.668373 |
rms_lateral_error | 1249.054939 | 1289.547955 |
rms_lateral_velocity_error | 1103.374622 | 890.818035 |
rms_tracking_curvature_discontinuity_ability | 15.135825 | 15.161382 |
Results Plotted (Lateral Error - Heading Error - XY Pose of vehicle)
- Test - 1 sample_vehicle + kashiwanoha_map
- Test - 2 sample_vehicle + gebze_map
To visualize the results
- Download the data: alltests.tar.gz
- Open PlotJuggler and import the data
- Now you can see the series on the left side of the window and plot the data.
- To plot the vehicle position for debugging, select the two curves x and y from the vehicle kinematic state (keeping the CTRL key pressed) and drag & drop them using the RIGHT mouse button.
My opinion
In my opinion, there is no big performance difference. Pure Pursuit performs better than MPC in kashiwanoha_map, while MPC performs better in the Gebze map. I think Pure Pursuit handles sharp turns better than MPC, but in general MPC has better performance.
@brkay54 Thanks for the report. Please add instructions on how to use the control evaluation tool to the Autoware Documentation (https://autowarefoundation.github.io/autoware-documentation/main/how-to-guides/), and we can close this issue.
@mitsudome-r I added the instruction documentation in a PR. We can close the issue now.
@brkay54 Thanks for all your work! I'm closing this issue.