
pkg/trace: add e2e benchmark

Open · knusbaum opened this pull request 1 year ago · 2 comments

What does this PR do?

This PR adds an e2e benchmark suite for the Trace Agent. It sets up a live agent, feeds generated traces into the receiver, and forwards them through the senders to a dumb intake.
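For context, a minimal sketch of the shape such a benchmark might take. The helpers `startTestAgent` and `makeTracePayload` and the `ReceiverURL` field are illustrative placeholders, not the PR's actual API; only the `httptest` intake and the `testing` loop are standard:

```go
package trace_test

import (
	"bytes"
	"net/http"
	"net/http/httptest"
	"testing"
)

// BenchmarkE2ETraces exercises the full receiver-to-sender path: payloads go
// into the agent's trace receiver and come out at a throwaway intake server.
func BenchmarkE2ETraces(b *testing.B) {
	// The "dumb intake": accept whatever the agent's senders forward, discard it.
	intake := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusOK)
	}))
	defer intake.Close()

	// Hypothetical helpers: boot a live trace agent configured to flush to the
	// test intake, and build a generated-trace payload to send repeatedly.
	agent := startTestAgent(b, intake.URL)
	defer agent.Stop()
	payload := makeTracePayload()

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		resp, err := http.Post(agent.ReceiverURL+"/v0.4/traces", "application/msgpack", bytes.NewReader(payload))
		if err != nil {
			b.Fatal(err)
		}
		resp.Body.Close()
	}
}
```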

Motivation

These benchmarks can provide high-level insight into the effects of changes that aren't covered by more targeted benchmarks. The suite fills a gap between the micro benchmarks here and the full-scale benchmarks run against the trace-agent binary.

Additional Notes

There are a lot of benchmarks here, and probably not all of them are useful. It would be nice to run a subset on PRs, and either delete the ones we don't think are useful or mark them as skipped so they can still be run manually if we think they retain some value (see the sketch below for one way to gate them).
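One common way to mark a benchmark as skipped by default while keeping it runnable on demand is an environment-variable guard. This is a generic Go pattern, not code from the PR, and the variable name `RUN_E2E_BENCH` is an assumption:

```go
package trace_test

import (
	"os"
	"testing"
)

func BenchmarkExpensiveE2E(b *testing.B) {
	// Skipped by default so PR runs stay fast; opt in manually when the
	// benchmark is still considered to have some value.
	if os.Getenv("RUN_E2E_BENCH") == "" {
		b.Skip("set RUN_E2E_BENCH=1 to run this benchmark")
	}
	for i := 0; i < b.N; i++ {
		// ... benchmark body ...
	}
}
```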

Possible Drawbacks / Trade-offs

Describe how to test/QA your changes

Run the benchmarks locally and compare the results with benchstat. This has already been done.
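Concretely, the usual Go workflow for such a comparison looks like the following; the package path `./pkg/trace/...` and the repetition count are assumptions, while `benchstat` itself is the standard tool from golang.org/x/perf:

```sh
# Install benchstat if needed.
go install golang.org/x/perf/cmd/benchstat@latest

# On the base branch, run each benchmark several times for stable statistics.
go test -run '^$' -bench . -count 10 ./pkg/trace/... | tee old.txt

# Check out this PR's branch and repeat.
go test -run '^$' -bench . -count 10 ./pkg/trace/... | tee new.txt

# benchstat reports per-benchmark deltas and whether they are significant.
benchstat old.txt new.txt
```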

knusbaum · May 28 '24 16:05

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 59.63%. Comparing base (c56965f) to head (6888b2a).

Additional details and impacted files
@@                        Coverage Diff                        @@
##           andrew.glaude/dev-trace-chans   #26028      +/-   ##
=================================================================
- Coverage                          61.74%   59.63%   -2.11%     
=================================================================
  Files                                237      197      -40     
  Lines                              20797    17836    -2961     
=================================================================
- Hits                               12841    10637    -2204     
+ Misses                              7399     6724     -675     
+ Partials                             557      475      -82     
Flag             Coverage Δ
amzn_aarch64     59.55% <ø> (-2.53%) ⬇️
centos_x86_64    59.60% <ø> (-2.53%) ⬇️
ubuntu_aarch64   59.62% <ø> (-2.51%) ⬇️
ubuntu_x86_64    59.62% <ø> (-2.54%) ⬇️
windows_amd64    ?

Flags with carried forward coverage won't be shown.


codecov[bot] · May 28 '24 16:05

Regression Detector Results

Run ID: d8c57d91-133e-4034-9aae-d66159cca7d1
Baseline: 55cf7d8df6921c0e7521453a3dafb75487cad7aa
Comparison: 6888b2aebabcd80490bac3ce676d6e2e63201f1a

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

No significant changes in experiment optimization goals

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

Fine details of change detection per experiment

perf  experiment                   goal                 Δ mean %   Δ mean % CI
➖    pycheck_1000_100byte_tags    % cpu utilization    +2.26      [-2.29, +6.81]
➖    tcp_syslog_to_blackhole      ingress throughput   +0.49      [-21.23, +22.21]
➖    idle                         memory utilization   +0.19      [+0.15, +0.23]
➖    uds_dogstatsd_to_api         ingress throughput   +0.02      [-0.19, +0.22]
➖    tcp_dd_logs_filter_exclude   ingress throughput   +0.01      [-0.03, +0.06]
➖    trace_agent_msgpack          ingress throughput   +0.00      [-0.01, +0.01]
➖    trace_agent_json             ingress throughput   -0.00      [-0.02, +0.02]
➖    file_tree                    memory utilization   -0.09      [-0.23, +0.04]
➖    otel_to_otel_logs            ingress throughput   -0.15      [-0.54, +0.23]
➖    basic_py_check               % cpu utilization    -0.98      [-3.46, +1.50]
➖    uds_dogstatsd_to_api_cpu     % cpu utilization    -2.07      [-4.91, +0.77]

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we flag a change in performance as a "regression" (a change worth investigating further) only if all of the following criteria are true (a minimal sketch of this rule follows the list):

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".
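Read together, the criteria amount to a simple predicate. A minimal sketch of that rule, as a paraphrase of the description above rather than the detector's actual code:

```go
package detector

import "math"

// experiment holds the per-experiment statistics reported in the table above.
type experiment struct {
	deltaMeanPct  float64 // estimated Δ mean %
	ciLow, ciHigh float64 // 90% confidence interval on Δ mean %
	markedErratic bool    // "erratic" flag from the experiment's configuration
}

// isRegression reports whether a change is flagged as worth investigating.
func isRegression(e experiment) bool {
	bigEnough := math.Abs(e.deltaMeanPct) >= 5.0           // criterion 1: effect size tolerance
	ciExcludesZero := e.ciLow > 0 || e.ciHigh < 0          // criterion 2: CI does not contain zero
	return bigEnough && ciExcludesZero && !e.markedErratic // criterion 3: not erratic
}
```

Applied to the table above: even though idle's CI of [+0.15, +0.23] excludes zero, its +0.19% effect is far below the 5.00% tolerance, so it is not flagged.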

pr-commenter[bot] · May 28 '24 18:05