Use the new AgentMajorVersion parameter in the e2e tests
### What does this PR do?

Uses the new `AgentMajorVersion` parameter in the e2e tests.

### Motivation

### Describe how to test/QA your changes

### Possible Drawbacks / Trade-offs

### Additional Notes

Relates to https://github.com/DataDog/test-infra-definitions/pull/1192.
Jira ticket: https://datadoghq.atlassian.net/browse/ADXT-680
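For context, here is a minimal Go sketch of what threading an agent-major-version parameter through an e2e suite can look like. All names below are illustrative assumptions, not the actual test-infra-definitions API; see the linked PR for the real parameter.

```go
// Hypothetical sketch only: the real option and provisioner API live in
// test-infra-definitions (linked above); names here are illustrative.
package e2eparams

import "fmt"

// AgentMajorVersion mirrors the kind of parameter this PR threads through
// the e2e tests: a string so callers can pass "6" or "7".
type AgentMajorVersion string

// SuiteParams stands in for an e2e suite configuration carrying the new
// parameter alongside whatever else the suite already accepts.
type SuiteParams struct {
	MajorVersion AgentMajorVersion
}

// InstallTarget shows how a suite might branch on the major version when
// picking which agent package to install.
func InstallTarget(p SuiteParams) string {
	v := p.MajorVersion
	if v == "" {
		v = "7" // assumed default for this sketch; the real default is defined upstream
	}
	return fmt.Sprintf("datadog-agent (major version %s)", v)
}
```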
### Fast Unit Tests Report

On pipeline 47205431 (CI Visibility), the following jobs did not run any unit tests:

- tests_windows-x64

If you modified Go files and expected unit tests to run in these jobs, please double-check the job logs. If you think tests should have been executed, reach out to #agent-devx-help.
### Regression Detector Results

Run ID: a070f905-23c9-4996-8f09-4d686a56c816 (Metrics dashboard, Target profiles)

Baseline: 1b238ece7b2d59a6631c303d31f3a025d3895d24
Comparison: 78adf6a7eacc2b068b4b41a5feb8facfdd01a1e2
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
#### No significant changes in experiment optimization goals

Confidence level: 90.00%. Effect size tolerance: |Δ mean %| ≥ 5.00%.

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.
#### Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | basic_py_check | % cpu utilization | +1.14 | [-1.59, +3.87] | 1 | Logs |
| ➖ | otel_to_otel_logs | ingress throughput | +0.58 | [-0.23, +1.39] | 1 | Logs |
| ➖ | idle_all_features | memory utilization | +0.53 | [+0.43, +0.63] | 1 | Logs, bounds checks dashboard |
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | +0.49 | [-0.22, +1.21] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | +0.45 | [-0.04, +0.94] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | +0.43 | [+0.32, +0.54] | 1 | Logs, bounds checks dashboard |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | +0.09 | [+0.05, +0.13] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | +0.03 | [-0.01, +0.07] | 1 | Logs, bounds checks dashboard |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | +0.03 | [-0.22, +0.28] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.00 | [-0.07, +0.08] | 1 | Logs |
| ➖ | file_to_blackhole_300ms_latency | egress throughput | +0.00 | [-0.18, +0.18] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | +0.00 | [-0.33, +0.33] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.01, +0.01] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | -0.01 | [-0.24, +0.21] | 1 | Logs |
| ➖ | file_tree | memory utilization | -0.04 | [-0.17, +0.08] | 1 | Logs |
| ➖ | idle | memory utilization | -0.26 | [-0.31, -0.21] | 1 | Logs, bounds checks dashboard |
| ➖ | pycheck_lots_of_tags | % cpu utilization | -0.50 | [-3.01, +2.02] | 1 | Logs |
#### Bounds Checks
| perf | experiment | bounds_check_name | replicates_passed |
|---|---|---|---|
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 |
| ✅ | file_to_blackhole_300ms_latency | memory_usage | 10/10 |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 |
| ✅ | idle | memory_usage | 10/10 |
| ✅ | idle_all_features | memory_usage | 10/10 |
| ✅ | quality_gate_idle | memory_usage | 10/10 |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 |
#### Explanation
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
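To make the "Δ mean %" estimate and its confidence interval concrete, here is a generic Go sketch using a normal approximation (z ≈ 1.645 for a 90% interval). This is textbook statistics under those assumptions, not the Regression Detector's actual estimator.

```go
package regression

import "math"

// deltaMeanPercentCI estimates "Δ mean %" -- the relative change of the
// comparison variant's mean over the baseline's -- with a normal-approximation
// confidence interval. A generic sketch, not the detector's real estimator.
func deltaMeanPercentCI(baseline, comparison []float64, z float64) (delta, lo, hi float64) {
	mb, sb := meanStd(baseline)
	mc, sc := meanStd(comparison)
	delta = (mc - mb) / mb * 100
	// Standard error of the difference in means, expressed in percent of baseline.
	se := math.Sqrt(sb*sb/float64(len(baseline))+sc*sc/float64(len(comparison))) / mb * 100
	return delta, delta - z*se, delta + z*se
}

// meanStd returns the sample mean and sample standard deviation of xs.
func meanStd(xs []float64) (mean, std float64) {
	for _, x := range xs {
		mean += x
	}
	mean /= float64(len(xs))
	var ss float64
	for _, x := range xs {
		ss += (x - mean) * (x - mean)
	}
	return mean, math.Sqrt(ss / float64(len(xs)-1))
}
```

Called with z = 1.645, `deltaMeanPercentCI` returns a point estimate plus the kind of interval reported in the "Δ mean % CI" column above.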
For each experiment, we flag a change in performance as a "regression" -- a change worth investigating further -- only if all of the following criteria are true (a sketch applying these criteria follows the list):

- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that, if our statistical model is accurate, there is at least a 90.00% chance of a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
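As referenced above, here is a sketch of those three criteria combined into a single check; the field names are illustrative assumptions, not the detector's actual schema.

```go
package regression

import "math"

// ExperimentResult holds the fields the three criteria need; names are
// illustrative, not the Regression Detector's schema.
type ExperimentResult struct {
	DeltaMeanPct float64 // estimated Δ mean %
	CILow        float64 // lower bound of the 90% "Δ mean % CI"
	CIHigh       float64 // upper bound of the 90% "Δ mean % CI"
	Erratic      bool    // experiment configuration marks it "erratic"
}

// isRegression applies the criteria: effect size at or above the tolerance,
// confidence interval excluding zero, and not marked erratic.
func isRegression(r ExperimentResult, tolerancePct float64) bool {
	bigEnough := math.Abs(r.DeltaMeanPct) >= tolerancePct // e.g. 5.00
	ciExcludesZero := r.CILow > 0 || r.CIHigh < 0
	return bigEnough && ciExcludesZero && !r.Erratic
}
```

For example, the idle row above has a CI of [-0.31, -0.21], which excludes zero, but its |Δ mean %| of 0.26 is well under the 5.00% tolerance, so it is not flagged.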
### Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM:

```
inv create-vm --pipeline-id=47205431 --os-family=ubuntu
```

Note: this applies to commit 78adf6a7.
/merge

🚂 MergeQueue: pull request added to the queue

The median merge time in `main` is 22m.

Use `/merge -c` to cancel this operation!