datadog-agent
Add E2E tests for Windows Installer agent config-related options
What does this PR do?
Add E2E tests for Windows Installer agent config-related options (LOGS_ENABLED, SITE, etc.):
https://docs.datadoghq.com/agent/basic_agent_usage/windows/?tab=commandline#installation-configuration-options
It also removes the equivalent kitchen tests.
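For context, each of these installer options roughly maps an MSI property to a key in the rendered `datadog.yaml`. A minimal table-driven sketch of what such a test checks, assuming it runs locally on the target host (the MSI path is illustrative, and the real tests drive the install/uninstall lifecycle through the e2e framework rather than calling `msiexec` directly per case):

```go
package installtest

import (
	"os"
	"os/exec"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestInstallOpts(t *testing.T) {
	tests := []struct {
		name      string
		msiOption string // property passed on the msiexec command line
		wantLine  string // line expected in the rendered datadog.yaml
	}{
		{"logs enabled", "LOGS_ENABLED=true", "logs_enabled: true"},
		{"site", "SITE=datadoghq.eu", "site: datadoghq.eu"},
	}
	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			// Silent install with the option under test (MSI path is illustrative).
			require.NoError(t, exec.Command("msiexec", "/qn", "/i", `C:\datadog-agent.msi`, tc.msiOption).Run())
			// Verify the option was rendered into the Agent's config file.
			cfg, err := os.ReadFile(`C:\ProgramData\Datadog\datadog.yaml`)
			require.NoError(t, err)
			assert.Contains(t, string(cfg), tc.wantLine)
		})
	}
}
```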
Motivation
https://datadoghq.atlassian.net/browse/WINA-507
Additional Notes
Marked draft since it's stacked on a dependent PR.
Possible Drawbacks / Trade-offs
Parts of these tests might be better off in their own team's E2E tests, for example:
- check that process-agent does/doesn't start depending on the options provided to the installer
- check that `cmd_port` is bound correctly when set in the config (sketched below)
However, E2E doesn't yet support passing MSI options to the Agent installer through Pulumi, so we will keep the equivalent kitchen test functionality for now so that we don't lose coverage.
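For illustration, once MSI options land in E2E, a check like the `cmd_port` one could be as small as dialing the port. A sketch, assuming the Agent's default command port of 5001; this is not the real e2e helper:

```go
package installtest

import (
	"fmt"
	"net"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

// assertCmdPortBound fails the test if nothing is listening on the
// Agent's command port (default 5001) on the given host.
func assertCmdPortBound(t *testing.T, host string, cmdPort int) {
	addr := fmt.Sprintf("%s:%d", host, cmdPort)
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	require.NoError(t, err, "expected the Agent command port to be listening on %s", addr)
	_ = conn.Close()
}
```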
TestSubServicesOpts runs two tabular tests on the same E2E host/stack. Running each tabular test on a separate stack is pending https://github.com/DataDog/datadog-agent/pull/23348.
Describe how to test/QA your changes
Bloop Bleep... Dogbot Here
Regression Detector Results
Run ID: d5e8f73c-552b-41fc-b268-fd2198613701
Baseline: c221f839802de34c89f5b6454a7e2ac675863ea5
Comparison: 1d4c47aa0cab529c93d906a0d8678c4e2638acd9
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
No significant changes in experiment optimization goals
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.
Experiments ignored for regressions
Regressions in experiments with settings containing erratic: true are ignored.
| perf | experiment | goal | Δ mean % | Δ mean % CI |
|---|---|---|---|---|
| ➖ | file_to_blackhole | % cpu utilization | +0.24 | [-6.31, +6.78] |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI |
|---|---|---|---|---|
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | +2.32 | [+0.88, +3.75] |
| ➖ | basic_py_check | % cpu utilization | +1.46 | [-0.74, +3.66] |
| ➖ | file_tree | memory utilization | +0.89 | [+0.78, +1.00] |
| ➖ | file_to_blackhole | % cpu utilization | +0.24 | [-6.31, +6.78] |
| ➖ | process_agent_standard_check_with_stats | memory utilization | +0.12 | [+0.07, +0.17] |
| ➖ | trace_agent_msgpack | ingress throughput | +0.01 | [+0.00, +0.02] |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.00 | [-0.00, +0.00] |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.00, +0.00] |
| ➖ | trace_agent_json | ingress throughput | -0.04 | [-0.07, -0.01] |
| ➖ | process_agent_real_time_mode | memory utilization | -0.05 | [-0.09, -0.00] |
| ➖ | process_agent_standard_check | memory utilization | -0.05 | [-0.10, -0.01] |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -0.12 | [-0.17, -0.06] |
| ➖ | idle | memory utilization | -0.72 | [-0.76, -0.68] |
| ➖ | otel_to_otel_logs | ingress throughput | -1.03 | [-1.68, -0.38] |
Explanation
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we flag a change in performance as a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
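For concreteness, these three criteria reduce to a small decision rule; a sketch (not the detector's actual code):

```go
package regression

import "math"

// isRegression restates the three criteria above as code; the thresholds
// mirror the report (effect size tolerance |Δ mean %| ≥ 5.00%, 90.00% CI).
func isRegression(deltaMeanPct, ciLow, ciHigh float64, erratic bool) bool {
	bigEnough := math.Abs(deltaMeanPct) >= 5.0     // criterion 1: effect size
	ciExcludesZero := ciLow > 0 || ciHigh < 0      // criterion 2: CI excludes zero
	return bigEnough && ciExcludesZero && !erratic // criterion 3: not marked erratic
}
```

Applied to the file_to_blackhole row above, isRegression(0.24, -6.31, 6.78, false) is false -- the change fails both the effect-size and CI checks, matching the ➖.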
Fast Unit Tests Report
On pipeline 31296517 (CI Visibility), the following jobs did not run any unit tests:
- tests_deb-arm64-py3
- tests_deb-x64-py3
- tests_flavor_dogstatsd_deb-x64
- tests_flavor_heroku_deb-x64
- tests_flavor_iot_deb-x64
- tests_rpm-arm64-py3
- tests_rpm-x64-py3
- tests_windows-x64
If you modified Go files and expected unit tests to run in these jobs, please double-check the job logs. If you think tests should have been executed, reach out to #agent-developer-experience.
Test changes on VM
Use this command from test-infra-definitions to manually test this PR's changes on a VM:
inv create-vm --pipeline-id=31296517 --os-family=ubuntu
Regression Detector Results
Run ID: ad9c07a8-f760-4ed5-8329-03f370b2b7e1
Baseline: 6ec7cd3c5e26b7c523b55fa93bb494328f405017
Comparison: d3ef4565ad5203c4083afbf31001fb99f1fcf4b8
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
No significant changes in experiment optimization goals
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.
Experiments ignored for regressions
Regressions in experiments with settings containing erratic: true are ignored.
| perf | experiment | goal | Δ mean % | Δ mean % CI |
|---|---|---|---|---|
| ➖ | file_to_blackhole | % cpu utilization | -0.67 | [-6.51, +5.18] |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI |
|---|---|---|---|---|
| ➖ | otel_to_otel_logs | ingress throughput | +0.08 | [-0.35, +0.51] |
| ➖ | trace_agent_json | ingress throughput | +0.00 | [-0.02, +0.03] |
| ➖ | trace_agent_msgpack | ingress throughput | +0.00 | [-0.00, +0.00] |
| ➖ | uds_dogstatsd_to_api | ingress throughput | -0.00 | [-0.21, +0.20] |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.01 | [-0.04, +0.02] |
| ➖ | file_tree | memory utilization | -0.36 | [-0.45, -0.27] |
| ➖ | pycheck_1000_100byte_tags | % cpu utilization | -0.57 | [-5.46, +4.33] |
| ➖ | process_agent_standard_check_with_stats | memory utilization | -0.60 | [-0.64, -0.56] |
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | -0.67 | [-3.54, +2.21] |
| ➖ | file_to_blackhole | % cpu utilization | -0.67 | [-6.51, +5.18] |
| ➖ | idle | memory utilization | -0.88 | [-0.91, -0.84] |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -0.94 | [-1.03, -0.86] |
| ➖ | process_agent_standard_check | memory utilization | -1.08 | [-1.12, -1.03] |
| ➖ | process_agent_real_time_mode | memory utilization | -1.32 | [-1.35, -1.28] |
| ➖ | basic_py_check | % cpu utilization | -3.32 | [-5.93, -0.72] |
Rebasing for CI fix: https://github.com/DataDog/datadog-agent/pull/24113
/merge
🚂 MergeQueue
Pull request added to the queue.
This build is next! (estimated merge in less than 27m)
Use /merge -c to cancel this operation!