datadog-agent
Introduce generic usage of GOMEMLIMIT in all Agents with logic suited for pure Go and Go/Python Agents.
What does this PR do?
Introduce a memory limiter that is selected based on build tags (see the sketch below):
- For pure Go Agents, it uses a static limiter, setting `GOMEMLIMIT` once based on a percentage set with `go_memlimit_pct`, defaulting to 95% of the cgroup limit.
- For Go+Python Agents, it uses a dynamic limiter, setting `GOMEMLIMIT` continuously (every `go_dynamic_memlimit_interval_seconds`) based on Python memory consumption. `go_memlimit_pct` is not applied in this case. This is only activated if Python telemetry is activated. It does NOT fall back to the static limiter, as that is not suited to a mixed workload.
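For illustration only, here is a minimal sketch of what the static variant could look like, assuming build-tag selection and a cgroup v2 layout. The package, function, and tag names (`memlimit`, `setStaticGoMemLimit`, `cgroupMemoryLimit`, `!python`) are hypothetical; the PR wires this through the Agent's own cgroup and configuration packages.

```go
//go:build !python

package memlimit

import (
	"fmt"
	"os"
	"runtime/debug"
	"strconv"
	"strings"
)

// cgroupMemoryLimit reads the container memory limit in bytes (0 if unset).
// Hypothetical helper, cgroup v2 only, for illustration.
func cgroupMemoryLimit() (uint64, error) {
	data, err := os.ReadFile("/sys/fs/cgroup/memory.max")
	if err != nil {
		return 0, err
	}
	s := strings.TrimSpace(string(data))
	if s == "max" {
		return 0, nil // no limit configured
	}
	return strconv.ParseUint(s, 10, 64)
}

// setStaticGoMemLimit sets GOMEMLIMIT once, to pct% of the cgroup limit.
func setStaticGoMemLimit(pct float64) error {
	limit, err := cgroupMemoryLimit()
	if err != nil {
		return fmt.Errorf("unable to read cgroup memory limit: %w", err)
	}
	if limit == 0 {
		return nil // not containerized or no limit: keep the Go default (no GOMEMLIMIT)
	}
	debug.SetMemoryLimit(int64(float64(limit) * pct / 100.0))
	return nil
}
```

The Go+Python binaries would be built with the opposite tag and drive `debug.SetMemoryLimit` from a ticker instead, as sketched further below.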
Motivation
Improve Agent operability.
Additional Notes
Replaced the existing mechanism for `trace-agent` and `system-probe`. It will behave the same, except that the percentage has been increased to 98%, as 10% of "lost" memory seems a bit high; we can review the default value if it does not work properly internally.
Possible Drawbacks / Trade-offs
Currently I put a hook in `pkg/collector/python/init.go` to set a variable with the Python memory usage. It's probably not the cleanest way. I've seen we already have a (more expensive?) `GetPythonInterpreterMemoryUsage` in `pkg/collector/python/helpers.go`.
Waiting on @DataDog/agent-shared-components feedback on the best way to get this information.
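To make the dynamic behavior concrete, here is a rough sketch of the refresh loop under the same assumptions; `runDynamicGoMemLimit` and the `pythonMemoryUsage` callback are hypothetical stand-ins for the hook (or `GetPythonInterpreterMemoryUsage`) mentioned above, not the PR's actual API.

```go
package memlimit

import (
	"context"
	"runtime/debug"
	"time"
)

// runDynamicGoMemLimit periodically recomputes GOMEMLIMIT as the cgroup limit
// minus the memory attributed to the embedded Python interpreter, so the Go GC
// only targets the share of the container that Python is not consuming.
func runDynamicGoMemLimit(ctx context.Context, cgroupLimit uint64, interval time.Duration, pythonMemoryUsage func() uint64) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			pyUsage := pythonMemoryUsage()
			if pyUsage >= cgroupLimit {
				continue // Python alone exceeds the limit; keep the previous value
			}
			debug.SetMemoryLimit(int64(cgroupLimit - pyUsage))
		}
	}
}
```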
Describe how to test/QA your changes
No QA card created for that as we're going to test that internally.
Reviewer's Checklist
- [x] If known, an appropriate milestone has been selected; otherwise the `Triage` milestone is set.
- [x] Use the `major_change` label if your change either has a major impact on the code base, is impacting multiple teams or is changing important well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer-facing change, use a release note.
- [x] A release note has been added or the `changelog/no-changelog` label has been applied.
- [x] Changed code has automated tests for its functionality.
- [x] Adequate QA/testing plan information is provided if the `qa/skip-qa` label is not applied.
- [x] At least one `team/..` label has been applied, indicating the team(s) that should QA this change.
- [x] If applicable, docs team has been notified or an issue has been opened on the documentation repo.
- [x] If applicable, the `need-change/operator` and `need-change/helm` labels have been applied.
- [x] If applicable, the `k8s/<min-version>` label has been applied, indicating the lowest Kubernetes version compatible with this feature.
- [x] If applicable, the config template has been updated.
98% won't work for `system-probe` because a significant chunk of RSS is not memory managed by Go, but `mmap`-ed memory for interacting with eBPF.
> 98% won't work for `system-probe` because a significant chunk of RSS is not memory managed by Go, but `mmap`-ed memory for interacting with eBPF.
@brycekahle It's currently running with 90% from #16190; however, we can also use the dynamic limiter, although I think the eBPF map is counted in `kernel` memory, which is not part of RSS but is counted in the `working_set`, which, IIRC, is ~what is used by the OOM killer.
90% worked best for us, given that in most cases the trace agent gets OOM killed because of traffic spikes. I'm afraid 98% is too close to the hard limit. Can we revert to 90%?
@ahmed-mez Interesting. On our large deployments, sparing 10% of >1GB sounds significant, but if you already performed some tests, 90% is probably a value we can go with, knowing that the GC can go above it if necessary and, in any case, we can adjust that per container later if we need to.
> I think the eBPF map is counted in `kernel` memory

There are also perf buffers and ring buffers, which are `mmap`-ed. `mmap` is specifically called out in the Go documentation as not being included.
> It's currently running with 90% from #16190

That was just merged yesterday, but @paulcacheux and I have already decided that 90% is insufficient and we need a more dynamic approach for system-probe.
> That was just merged yesterday, but @paulcacheux and I have already decided that 90% is insufficient and we need a more dynamic approach for system-probe.
That's similar to the Agent with Python, and I think we can work to fit it with the `dynamicMemoryLimiter`, WDYT?
> Interesting. On our large deployments, sparing 10% of >1GB sounds significant but if you already performed some tests, 90% is probably a value we can go with, knowing that the GC can go above it if necessary and, in any case, we can adjust that per container later if we need to.
Yes I completely understand the rationale behind experimenting with values higher than 90%, also open to testing and adapting the values!
And to be clear, I'm just suggesting to keep 90% for the trace-agent only. We can try higher values internally, but leave 90% as the default in the trace-agent config (we can have it under `apm_config` in the datadog.yaml if it makes sense).
:warning::rotating_light: Warning, this pull request increases the binary size of serverless extension by 32 bytes. Each MB of binary size increase means about 10ms of additional cold start time, so this pull request would increase cold start time by 0ms.
New dependencies added
We suggest you consider adding the `!serverless` build tag to remove any new dependencies not needed in the serverless extension.
If you have questions, we are happy to help, come visit us in the #serverless slack channel and provide a link to this comment.
Interesting, I'm going to warn the Serverless team about the above message, it shouldn't have alerted 🤔
@brycekahle I've added the ability to compute the external value based on the self cgroup stats. Hope it works for you; you can of course customize the function, my implementation is just an example.
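For reference, here is a hedged sketch of what such an "external memory" function could look like when derived from the process's own cgroup stats. `externalMemoryFromCgroup` and `cgroupMemoryUsage` are hypothetical names, the cgroup v2 path is an assumption, and the subtraction is only an approximation, not the implementation in this PR.

```go
package memlimit

import (
	"os"
	"runtime"
	"strconv"
	"strings"
)

// cgroupMemoryUsage reads the container's current memory usage in bytes
// (cgroup v2 path, an assumption for this sketch).
func cgroupMemoryUsage() (uint64, error) {
	data, err := os.ReadFile("/sys/fs/cgroup/memory.current")
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64)
}

// externalMemoryFromCgroup estimates memory the Go runtime does not manage
// (mmap-ed eBPF buffers, an embedded Python interpreter, ...) as
// "cgroup usage minus memory held by the Go runtime".
func externalMemoryFromCgroup() (uint64, error) {
	usage, err := cgroupMemoryUsage()
	if err != nil {
		return 0, err
	}
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	goManaged := ms.Sys - ms.HeapReleased // rough approximation of Go-resident memory
	if usage <= goManaged {
		return 0, nil
	}
	return usage - goManaged, nil
}
```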
@ahmed-mez I've set parameters per Agent and set the `trace-agent` default to 90%.
:warning::rotating_light: Warning, this pull request increases the binary size of serverless extension by 32 bytes. Each MB of binary size increase means about 10ms of additional cold start time, so this pull request would increase cold start time by 0ms.
New dependencies added
We suggest you consider adding the `!serverless` build tag to remove any new dependencies not needed in the serverless extension.
If you have questions, we are happy to help, come visit us in the #serverless slack channel and provide a link to this comment.
Go Package Import Differences
Baseline: 9a7a7660b47d3be34c7f46585c099dcfa95bde2a Comparison: bc9cb55415f909217c227a551e3dfda22a0c9054
| binary | os | arch | change |
|---|---|---|---|
| process-agent | linux | amd64 | +4, -0<br>+github.com/DataDog/datadog-agent/pkg/runtime<br>+go.uber.org/automaxprocs/internal/cgroups<br>+go.uber.org/automaxprocs/internal/runtime<br>+go.uber.org/automaxprocs/maxprocs |
| process-agent | linux | arm64 | +4, -0<br>+github.com/DataDog/datadog-agent/pkg/runtime<br>+go.uber.org/automaxprocs/internal/cgroups<br>+go.uber.org/automaxprocs/internal/runtime<br>+go.uber.org/automaxprocs/maxprocs |
| process-agent | windows | amd64 | +3, -0<br>+github.com/DataDog/datadog-agent/pkg/runtime<br>+go.uber.org/automaxprocs/internal/runtime<br>+go.uber.org/automaxprocs/maxprocs |
| process-agent | darwin | amd64 | +3, -0<br>+github.com/DataDog/datadog-agent/pkg/runtime<br>+go.uber.org/automaxprocs/internal/runtime<br>+go.uber.org/automaxprocs/maxprocs |
| process-agent | darwin | arm64 | +3, -0<br>+github.com/DataDog/datadog-agent/pkg/runtime<br>+go.uber.org/automaxprocs/internal/runtime<br>+go.uber.org/automaxprocs/maxprocs |
| heroku-process-agent | linux | amd64 | +4, -0<br>+github.com/DataDog/datadog-agent/pkg/runtime<br>+go.uber.org/automaxprocs/internal/cgroups<br>+go.uber.org/automaxprocs/internal/runtime<br>+go.uber.org/automaxprocs/maxprocs |
Bloop Bleep... Dogbot Here
Regression Detector Results
Run ID: 8c6e2801-361a-46bf-b462-9bac56d4a267 Baseline: 9a7a7660b47d3be34c7f46585c099dcfa95bde2a Comparison: bc9cb55415f909217c227a551e3dfda22a0c9054 Total CPUs: 7
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
No significant changes in experiment optimization goals
Confidence level: 90.00% Effect size tolerance: |Δ mean %| ≥ 5.00%
There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.
Experiments ignored for regressions
Regressions in experiments with settings containing `erratic: true` are ignored.
| perf | experiment | goal | Δ mean % | Δ mean % CI |
|---|---|---|---|---|
| ➖ | file_to_blackhole | % cpu utilization | +0.57 | [-6.04, +7.19] |
| ➖ | file_tree | memory utilization | +0.41 | [+0.31, +0.51] |
| ➖ | idle | memory utilization | +0.16 | [+0.13, +0.20] |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI |
|---|---|---|---|---|
| ➖ | tcp_syslog_to_blackhole | ingress throughput | +1.48 | [+1.41, +1.55] |
| ➖ | process_agent_standard_check_with_stats | memory utilization | +0.58 | [+0.53, +0.62] |
| ➖ | file_to_blackhole | % cpu utilization | +0.57 | [-6.04, +7.19] |
| ➖ | process_agent_real_time_mode | memory utilization | +0.51 | [+0.46, +0.55] |
| ➖ | file_tree | memory utilization | +0.41 | [+0.31, +0.51] |
| ➖ | otel_to_otel_logs | ingress throughput | +0.29 | [-0.45, +1.03] |
| ➖ | idle | memory utilization | +0.16 | [+0.13, +0.20] |
| ➖ | trace_agent_msgpack | ingress throughput | +0.10 | [+0.08, +0.12] |
| ➖ | trace_agent_json | ingress throughput | +0.07 | [+0.04, +0.09] |
| ➖ | uds_dogstatsd_to_api | ingress throughput | -0.00 | [-0.04, +0.04] |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.06, +0.06] |
| ➖ | process_agent_standard_check | memory utilization | -0.94 | [-1.00, -0.89] |
Explanation
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".