
Make Go check loader have priority over Python

pgimalac opened this pull request 6 months ago • 5 comments

What does this PR do?

Make the Go check loader have priority over the Python check loader.

Motivation

This will eventually allow not loading Python by default. For now, loading even a Go check requires Python, because we first need to check whether a Python version of that check exists.
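To make the mechanism concrete, here is a minimal sketch of a priority-ordered loader chain. This is illustrative only: `Check`, `CheckLoader`, `errNotFound`, and `loadCheck` are hypothetical names, not the agent's actual API.

```go
package loader

import (
	"errors"
	"fmt"
)

// Check is a hypothetical stand-in for a loaded, runnable check instance.
type Check interface{ Run() error }

// CheckLoader is a hypothetical loader interface: each loader either
// produces a check or reports that it has no implementation for that name.
type CheckLoader interface {
	Name() string
	Load(checkName string) (Check, error)
}

// errNotFound signals "this loader has no such check", as opposed to a
// genuine loading failure.
var errNotFound = errors.New("check not implemented by this loader")

// loadCheck walks the loaders in priority order and returns the first match.
// With the Go loader placed before the Python loader, a check that has a Go
// implementation is resolved without ever consulting the Python loader.
func loadCheck(loaders []CheckLoader, checkName string) (Check, error) {
	for _, l := range loaders {
		check, err := l.Load(checkName)
		if err == nil {
			return check, nil
		}
		if !errors.Is(err, errNotFound) {
			return nil, fmt.Errorf("loader %s failed for %s: %w", l.Name(), checkName, err)
		}
	}
	return nil, fmt.Errorf("no loader provides check %s", checkName)
}
```

With the Go loader first in the slice, a check that has a Go implementation never reaches the Python loader, which is what eventually makes a Python-free default possible.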

Describe how you validated your changes

We know that 5 checks have both a Go and a Python version:

  • network
  • disk
  • kubelet
  • snmp
  • win32_event_log

Ensure that the check(s) mentioned above that you own still behave in the same way, in particular that the same version (Go/Python) runs, for each flavor/platform where it makes sense.

I tested the network and disk checks for Agent Runtimes, with the default flavor of the agent, with the iot agent, and on k8s. In each case we were still running the same expected version (Go for IoT, Python otherwise).

Possible Drawbacks / Trade-offs

Additional Notes

The SNMP check had special handling to try loading with Go first and then Python, so its behavior is unchanged. Each of the other checks has a configuration option to pick whether the Go or the Python version runs, so there should be no impact on that side either (it doesn't depend on loader priority).
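For illustration, here is how such an explicit per-check preference can short-circuit the priority order, building on the hypothetical `CheckLoader` sketch above (the `preferredLoader` parameter is illustrative, not the agent's actual configuration key):

```go
// loadWithPreference extends the sketch above: if the check's configuration
// explicitly names a loader, only that loader is consulted, so flipping the
// default priority order cannot change which implementation runs for it.
func loadWithPreference(loaders []CheckLoader, checkName, preferredLoader string) (Check, error) {
	if preferredLoader != "" {
		for _, l := range loaders {
			if l.Name() == preferredLoader {
				return l.Load(checkName)
			}
		}
		return nil, fmt.Errorf("configured loader %q is not available", preferredLoader)
	}
	// No explicit preference: fall back to the priority order.
	return loadCheck(loaders, checkName)
}
```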

We also checked whether users were running custom Python checks with the same name as a Go check, using telemetry added in Agent 7.64.0 (PR), and didn't find any such case.

pgimalac avatar Jun 19 '25 12:06 pgimalac

Regression Detector

Regression Detector Results

Metrics dashboard
Target profiles
Run ID: 2c3963cf-1838-4587-922a-e665cde4fa70

Baseline: 56555ea78b9959d68abe1ca156902b2281d936be
Comparison: 34ddc04de65119c455aeb85271a6d5af9f674f4e
Diff

Optimization Goals: ✅ No significant changes detected

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | file_tree | memory utilization | +2.64 | [+2.41, +2.86] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | +1.29 | [+0.41, +2.17] | 1 | Logs |
| ➖ | quality_gate_logs | % cpu utilization | +1.23 | [-1.53, +3.98] | 1 | Logs, bounds checks dashboard |
| ➖ | quality_gate_idle_all_features | memory utilization | +0.47 | [+0.35, +0.60] | 1 | Logs, bounds checks dashboard |
| ➖ | docker_containers_memory | memory utilization | +0.40 | [+0.33, +0.47] | 1 | Logs |
| ➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | +0.19 | [+0.15, +0.24] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | +0.09 | [+0.01, +0.16] | 1 | Logs, bounds checks dashboard |
| ➖ | file_to_blackhole_0ms_latency_http1 | egress throughput | +0.05 | [-0.55, +0.66] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | +0.05 | [-0.51, +0.62] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency_http2 | egress throughput | +0.05 | [-0.50, +0.60] | 1 | Logs |
| ➖ | otlp_ingest_metrics | memory utilization | +0.04 | [-0.13, +0.21] | 1 | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | +0.04 | [-0.01, +0.09] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency_linear_load | egress throughput | +0.03 | [-0.21, +0.26] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | +0.01 | [-0.58, +0.61] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.01 | [-0.01, +0.03] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.01 | [-0.25, +0.26] | 1 | Logs |
| ➖ | ddot_logs | memory utilization | +0.00 | [-0.09, +0.10] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | -0.01 | [-0.62, +0.60] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | -0.04 | [-0.61, +0.52] | 1 | Logs |
| ➖ | file_to_blackhole_300ms_latency | egress throughput | -0.06 | [-0.69, +0.58] | 1 | Logs |
| ➖ | docker_containers_cpu | % cpu utilization | -0.07 | [-3.07, +2.93] | 1 | Logs |
| ➖ | ddot_metrics | memory utilization | -0.21 | [-0.33, -0.10] | 1 | Logs |
| ➖ | otlp_ingest_logs | memory utilization | -0.35 | [-0.47, -0.22] | 1 | Logs |

Bounds Checks: ✅ Passed

| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | docker_containers_cpu | simple_check_run | 10/10 | |
| ✅ | docker_containers_memory | memory_usage | 10/10 | |
| ✅ | docker_containers_memory | simple_check_run | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| ✅ | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | memory_usage | 10/10 | bounds checks dashboard |

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide that a change in performance is a "regression" -- a change worth investigating further -- only if all of the following criteria are true (a small sketch of this decision rule follows the list):

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".
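As a worked illustration, criteria 1 and 2 can be expressed as a small predicate. This is a sketch under the stated thresholds, not the detector's actual code; all names are illustrative.

```go
package main

import (
	"fmt"
	"math"
)

// isRegression sketches the stated decision rule: the estimated effect must
// be at least 5% in absolute value, the 90% confidence interval for the mean
// change must not contain zero, and the experiment must not be marked
// erratic in its configuration.
func isRegression(deltaMeanPct, ciLow, ciHigh float64, erratic bool) bool {
	const effectSizeTolerance = 5.0 // |Δ mean %| threshold
	bigEnough := math.Abs(deltaMeanPct) >= effectSizeTolerance
	ciExcludesZero := ciLow > 0 || ciHigh < 0
	return bigEnough && ciExcludesZero && !erratic
}

func main() {
	// file_tree from the table above: its CI [+2.41, +2.86] excludes zero,
	// but |+2.64| is below the 5% tolerance, so it is not flagged.
	fmt.Println(isRegression(2.64, 2.41, 2.86, false)) // false
}
```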

CI Pass/Fail Decision

Passed. All Quality Gates passed.

  • quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.

cit-pr-commenter[bot] avatar Jun 19 '25 13:06 cit-pr-commenter[bot]

Static quality checks

✅ Please find below the results from the static quality gates. Comparison made with ancestor 56555ea78b9959d68abe1ca156902b2281d936be.

Successful checks

Info

| Quality gate | Δ on disk (MiB) | On disk size (current < limit, MiB) | Δ on wire (MiB) | On wire size (current < limit, MiB) |
|---|---|---|---|---|
| agent_deb_amd64 | +0 | 697.29 < 697.37 | +0 | 176.07 < 177.03 |
| agent_deb_amd64_fips | +0 | 695.55 < 695.59 | -0.03 | 175.57 < 176.51 |
| agent_heroku_amd64 | +0 | 349.05 < 359.67 | +0 | 93.65 < 97.47 |
| agent_msi | -0 | 959.07 < 959.86 | -0.02 | 146.22 < 147.27 |
| agent_rpm_amd64 | +0 | 697.28 < 697.36 | -0.04 | 177.68 < 178.56 |
| agent_rpm_amd64_fips | +0 | 695.54 < 695.58 | -0.02 | 177.57 < 178.43 |
| agent_rpm_arm64 | +0 | 687.3 < 687.37 | +0.04 | 161.2 < 161.99 |
| agent_rpm_arm64_fips | +0 | 685.68 < 685.72 | -0.03 | 160.26 < 161.11 |
| agent_suse_amd64 | +0 | 697.28 < 697.36 | -0.04 | 177.68 < 178.56 |
| agent_suse_amd64_fips | +0 | 695.54 < 695.58 | -0.02 | 177.57 < 178.43 |
| agent_suse_arm64 | +0 | 687.3 < 687.37 | +0.04 | 161.2 < 161.99 |
| agent_suse_arm64_fips | +0 | 685.68 < 685.72 | -0.03 | 160.26 < 161.11 |
| docker_agent_amd64 | +0 | 781.09 < 781.16 | -0 | 268.83 < 269.63 |
| docker_agent_arm64 | +0 | 794.56 < 794.62 | +0.01 | 256.19 < 257.0 |
| docker_agent_jmx_amd64 | +0 | 972.29 < 972.35 | -0 | 337.8 < 338.6 |
| docker_agent_jmx_arm64 | +0 | 974.35 < 974.41 | -0.01 | 321.13 < 321.97 |
| docker_agent_windows1809 | -0.19 | 1180.42 < 1185.29 | -0.11 | 416.07 < 420.95 |
| docker_agent_windows1809_core | +0 | 5910.57 < 5915.25 | 0 | 2048.0 < 2049.0 |
| docker_agent_windows1809_core_jmx | +22.32 | 6054.47 < 6059.4 | 0 | 2048.0 < 2049.0 |
| docker_agent_windows1809_jmx | -0 | 1302.24 < 1306.92 | -0.04 | 458.4 < 463.21 |
| docker_agent_windows2022 | -0.39 | 1199.65 < 1204.42 | +0.03 | 428.89 < 433.71 |
| docker_agent_windows2022_core | -0.01 | 5883.8 < 5888.56 | 0 | 2048.0 < 2049.0 |
| docker_agent_windows2022_core_jmx | -0.17 | 6005.28 < 6009.95 | 0 | 2048.0 < 2049.0 |
| docker_agent_windows2022_jmx | -0.23 | 1321.43 < 1326.13 | -0.02 | 471.16 < 475.99 |
| docker_cluster_agent_amd64 | +0 | 212.9 < 213.79 | -0 | 72.41 < 73.33 |
| docker_cluster_agent_arm64 | +0 | 228.76 < 229.64 | +0 | 68.68 < 69.6 |
| docker_cws_instrumentation_amd64 | +0 | 7.08 < 7.12 | +0 | 2.95 < 3.29 |
| docker_cws_instrumentation_arm64 | +0 | 6.69 < 6.92 | -0 | 2.7 < 3.07 |
| docker_dogstatsd_amd64 | +0 | 39.23 < 39.57 | +0 | 15.12 < 15.76 |
| docker_dogstatsd_arm64 | +0 | 37.88 < 38.2 | +0 | 14.54 < 14.83 |
| dogstatsd_deb_amd64 | +0 | 30.46 < 31.4 | -0 | 8.0 < 8.95 |
| dogstatsd_deb_arm64 | +0 | 29.03 < 29.97 | -0 | 6.94 < 7.89 |
| dogstatsd_rpm_amd64 | +0 | 30.46 < 31.4 | +0 | 8.01 < 8.96 |
| dogstatsd_suse_amd64 | +0 | 30.46 < 31.4 | +0 | 8.01 < 8.96 |
| iot_agent_deb_amd64 | +0 | 50.48 < 51.38 | -0 | 12.85 < 13.79 |
| iot_agent_deb_arm64 | +0 | 47.95 < 48.85 | +0 | 11.15 < 12.09 |
| iot_agent_deb_armhf | +0 | 47.52 < 48.42 | -0 | 11.21 < 12.16 |
| iot_agent_rpm_amd64 | +0 | 50.48 < 51.38 | -0 | 12.87 < 13.81 |
| iot_agent_rpm_arm64 | +0 | 47.95 < 48.85 | -0 | 11.17 < 12.11 |
| iot_agent_suse_amd64 | +0 | 50.48 < 51.38 | -0 | 12.87 < 13.81 |

Is there a specific reason for the Python check loader to be prioritized by the scheduler? Why do we need to check if a Python version exists for every check?

Enzu83 avatar Jun 25 '25 15:06 Enzu83

@Enzu83 here is an RFC explaining why we're making this change.

Is there a specific reason for the Python check loader to be prioritized by the scheduler?

I believe Agent 6 initially only had Python checks (inherited from Agent 5), so there was only a Python loader. Then we added an IoT Agent, which doesn't ship with Python, so we needed a basic Go version of some checks to provide system metrics. If Python is available we should use it, as it contains a better version of the checks; otherwise we fall back to the Go version. Also, doing it the other way around could have broken users: if a user had a custom Python check with the same name as one of the Go checks, the Go version would have shadowed it (note that I was able to verify that there is no such case).

Why do we need to check if a Python version exists for every check?

That's just defined by the loader priority: as long as Python checks have priority over Go checks (which is the case until this PR), we need to try loading with Python and then with Go. For Go the list of checks is known and straightforward, but for Python we can't easily know whether a check has a Python implementation. It requires importing the Python module with the same name as the check (e.g. snmp or datadog_checks.snmp) and searching that module for a class which extends the base check class (plus a few other minor conditions). I tried some alternative ways of knowing whether a check has a Python version without needing to load Python (e.g. https://github.com/DataDog/datadog-agent/pull/32922), but it's a difficult problem to solve correctly (lots of caveats).
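To illustrate the asymmetry: Go checks can be registered in a plain map at build time, so "is there a Go implementation?" is a map lookup, while the Python side needs an interpreter, a module import, and class introspection. A sketch, continuing the hypothetical `Check` type from the earlier sketch (these names are illustrative, not the agent's actual registry API):

```go
// goCheckFactories is a hypothetical build-time registry of Go checks.
var goCheckFactories = map[string]func() Check{}

// RegisterGoCheck is called once per Go check, typically from an init()
// function in the check's package.
func RegisterGoCheck(name string, factory func() Check) {
	goCheckFactories[name] = factory
}

// hasGoImplementation answers the question with a map lookup; no Python
// interpreter, module import, or class search is needed.
func hasGoImplementation(name string) bool {
	_, ok := goCheckFactories[name]
	return ok
}
```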

Anyway, with this PR we make Go checks have priority over Python, which means that if a check has a Go implementation we don't need to try loading it with Python; we can just stop there. This will enable not loading Python at all by default, once all the default Python checks have been migrated to Go.

pgimalac avatar Jun 25 '25 15:06 pgimalac

@pgimalac Thanks for the links and the context! Everything is clear for me now.

Enzu83 avatar Jun 26 '25 08:06 Enzu83

/merge

pgimalac avatar Jun 30 '25 08:06 pgimalac

View all feedback in the Devflow UI.

2025-06-30 08:12:08 UTC ℹ️ Start processing command /merge


2025-06-30 08:12:45 UTC ℹ️ MergeQueue: pull request added to the queue

The expected merge time in main is approximately 60m (p90).


2025-06-30 08:54:37 UTC ℹ️ MergeQueue: This merge request was merged

dd-devflow[bot] avatar Jun 30 '25 08:06 dd-devflow[bot]