datadog-agent
Allow developers to target "alien" VMs with KMT build tasks
What does this PR do?
This PR allows developers to use the KMT build tasks to target VMs launched outside the purview of KMT. These can be local VMs launched with Parallels, VMware, etc., or remote VMs running in EC2, GCP, and so on.
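Conceptually, an "alien" VM is just an SSH endpoint that the build tasks can push artifacts to. The sketch below is a hypothetical illustration of that idea, not the actual KMT task code: the `AlienVM` shape, its field names, and the `share_build` helper are all assumptions made for the example.

```python
import subprocess
from dataclasses import dataclass


@dataclass
class AlienVM:
    """Connection details for a VM not managed by KMT (hypothetical shape)."""
    name: str
    ip: str
    ssh_user: str
    ssh_key_path: str
    arch: str  # e.g. "x86_64" or "arm64"


def share_build(vm: AlienVM, artifact: str, dest: str = "/tmp") -> None:
    """Copy a locally built system-probe package to the alien VM over scp."""
    subprocess.run(
        ["scp", "-i", vm.ssh_key_path, artifact, f"{vm.ssh_user}@{vm.ip}:{dest}"],
        check=True,
    )


# Example: a VM launched manually in EC2, outside KMT's purview.
vm = AlienVM("ec2-test", "10.0.0.12", "ubuntu", "~/.ssh/test-key.pem", "x86_64")
# share_build(vm, "system-probe.tar.gz")
```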
Motivation
The KMT build tasks allow a user to build and test system-probe locally, exactly like the CI does. This new feature allows developers to share the build/test packages with any VM, even one that was not launched with KMT.
Additional Notes
Possible Drawbacks / Trade-offs
Describe how to test/QA your changes
[Fast Unit Tests Report]
On pipeline 44294754 (CI Visibility), the following jobs did not run any unit tests:
Jobs:
- tests_deb-arm64-py3
- tests_deb-x64-py3
- tests_flavor_dogstatsd_deb-x64
- tests_flavor_heroku_deb-x64
- tests_flavor_iot_deb-x64
- tests_rpm-arm64-py3
- tests_rpm-x64-py3
- tests_windows-x64
If you modified Go files and expected unit tests to run in these jobs, please double-check the job logs. If you think tests should have been executed, reach out to #agent-devx-help.
Regression Detector
Regression Detector Results
Run ID: 0c963193-d5af-4645-ba38-b1cdfa0a2540 (Metrics dashboard, Target profiles)
Baseline: 3528ce7782207f4610c6d8e857a418769a3f064a
Comparison: fe5f04de1a9ebf0c6c61dcee4664e61db17fdd0c
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
No significant changes in experiment optimization goals
Confidence level: 90.00% Effect size tolerance: |Δ mean %| ≥ 5.00%
There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | idle | memory utilization | +1.23 | [+1.19, +1.27] | 1 | Logs |
| ➖ | basic_py_check | % cpu utilization | +0.87 | [-1.86, +3.61] | 1 | Logs |
| ➖ | file_tree | memory utilization | +0.59 | [+0.47, +0.71] | 1 | Logs |
| ➖ | pycheck_lots_of_tags | % cpu utilization | +0.44 | [-2.24, +3.13] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | -0.00 | [-0.00, +0.00] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.01, +0.01] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | -0.10 | [-0.87, +0.68] | 1 | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -0.87 | [-0.93, -0.82] | 1 | Logs |
| ➖ | otel_to_otel_logs | ingress throughput | -1.10 | [-1.92, -0.27] | 1 | Logs |
Bounds Checks
| perf | experiment | bounds_check_name | replicates_passed |
|---|---|---|---|
| ❌ | idle | memory_usage | 8/10 |
Explanation
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we flag a change in performance as a "regression" (a change worth investigating further) only if all of the following criteria are true (see the sketch after this list):
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that, if our statistical model is accurate, there is at least a 90.00% chance of a real difference in performance between the baseline and comparison variants.
- Its configuration does not mark it "erratic".
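As a concrete illustration of these criteria, the sketch below recomputes a Δ mean % confidence interval from raw trial samples and applies the three checks. It is a minimal approximation (a normal-based interval over made-up sample data), not the detector's actual statistical model.

```python
from statistics import NormalDist, mean, stdev


def delta_mean_pct_ci(baseline, comparison, confidence=0.90):
    """Estimate Δ mean % = (mean(comparison) - mean(baseline)) / mean(baseline) * 100
    with a two-sided confidence interval (normal approximation)."""
    b_mean, c_mean = mean(baseline), mean(comparison)
    delta_pct = (c_mean - b_mean) / b_mean * 100.0
    # Standard error of the difference of means, expressed on the percent scale.
    se = (stdev(baseline) ** 2 / len(baseline)
          + stdev(comparison) ** 2 / len(comparison)) ** 0.5
    se_pct = se / b_mean * 100.0
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return delta_pct, (delta_pct - z * se_pct, delta_pct + z * se_pct)


def is_regression(delta_pct, ci, tolerance=5.0, erratic=False):
    """Apply the three criteria: effect size, CI excluding zero, not erratic."""
    low, high = ci
    big_enough = abs(delta_pct) >= tolerance
    ci_excludes_zero = low > 0 or high < 0
    return big_enough and ci_excludes_zero and not erratic


# Made-up throughput samples for one experiment, one value per trial replicate.
baseline = [100.0, 101.5, 99.8, 100.7]
comparison = [106.2, 107.0, 105.5, 106.8]
d, ci = delta_mean_pct_ci(baseline, comparison)
print(f"Δ mean % = {d:+.2f}, CI = [{ci[0]:+.2f}, {ci[1]:+.2f}], "
      f"regression: {is_regression(d, ci)}")
```

On this fabricated data the estimated Δ mean % is about +5.8% with a CI excluding zero, so all three criteria fire and the change would be flagged for a closer look.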
/merge
:steam_locomotive: MergeQueue: pull request added to the queue
The median merge time in main is 23m.
Use /merge -c to cancel this operation!