datadog-agent
[protobuf] further refactor of pb definitions into pkg/proto
What does this PR do?
This PR continues the consolidation of protobuf definitions into the `pkg/proto` package. We were missing a few elements from the trace team; this PR also includes some other miscellaneous changes (including some small tooling fixes).
Motivation
We want to make sure that working with `pkg/proto` is consistent across the repository, and this PR hopes to cement the current protobuf workflow by continuing to move relevant bits and pieces into that `pkg/proto` sub-package. This PR has no functional aspirations beyond the developer experience and should neither add nor remove any features.
Additional Notes
It's important to be thorough in the review of the protobuf changes, given that we have moved quite a few protobuf messages around and that this PR bit-rotted for about 6 weeks before being reworked. Compilation success is a good indicator that things are fine, but I would really appreciate all code owners signing off on this one.
The sooner we can land this, the better, as these efforts can enter code-conflict situations rapidly.
Possible Drawbacks / Trade-offs
No trade-offs are expected.
Describe how to test/QA your changes
There is no easy way to QA a relatively large refactor, but we should focus on evaluating the correctness and validity of all protobuf-based communications, be it gRPC or any other serialized form of communication that may rely on this code.
Reviewer's Checklist
- [x] If known, an appropriate milestone has been selected; otherwise the `Triage` milestone is set.
- [ ] The appropriate `team/..` label has been applied, if known.
- [ ] Use the `major_change` label if your change either has a major impact on the code base, impacts multiple teams, or changes important well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer-facing change, use a release note.
- [ ] A release note has been added or the `changelog/no-changelog` label has been applied.
- [ ] Changed code has automated tests for its functionality.
- [ ] Adequate QA/testing plan information is provided if the `qa/skip-qa` label is not applied.
- [ ] If applicable, the docs team has been notified or an issue has been opened on the documentation repo.
- [ ] If applicable, the `need-change/operator` and `need-change/helm` labels have been applied.
- [ ] If applicable, the config template has been updated.
What's the reason behind the `protodep` package, given that the `agent-payload` repo supports Go modules now?
We don't just have `agent-payload` as a protobuf dependency; we have multiple other definitions we rely on for gRPC, among others. Protodep is a nice tool to make sure those are tracked nicely. I'm not sure if I'm missing something from your question. Also, this PR does not introduce protodep; we have been using it for a while.
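For readers unfamiliar with the tool: protodep pins vendored `.proto` files in a TOML manifest. The fragment below is an illustrative sketch only, not this repository's actual configuration; the paths, target, and field values shown are assumptions for the sake of example:

```toml
# protodep.toml (illustrative sketch, not the real manifest)
proto_outdir = "./pkg/proto"

[[dependencies]]
  # Hypothetical entry pinning a remote proto repository so that
  # regenerated Go code always comes from a known revision.
  target = "github.com/DataDog/agent-payload/proto"
  branch = "master"
```

With a manifest like this, running protodep fetches the listed `.proto` files into `proto_outdir`, so the definitions used for code generation are tracked alongside the repository rather than resolved ad hoc.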
This PR does add `agent-payload` to protodep. Since it wasn't necessary before, I'm curious what value it provides?
I'm not sure if I'm missing something from your question
I think the `dep` part of `protodep` was confusing me, especially since it uses the same TOML file format.
I was hoping for some kind of README or other documentation to guide me as to what I'm looking at. I'm also happy to just click "Approve" on the basis that @truthbk knows what he's doing :)
Bloop Bleep... Dogbot Here
Regression Detector Results
Run ID: b17577c4-b661-4888-830b-104bcfcc91e6
Baseline: e696701aed1c442f56b29c312e0c74fa1a23aa49
Comparison: a2751dadfe2b304bb247f691fb3b87c3ab0186fc
Total datadog-agent CPUs: 7
Explanation
A regression test is an integrated performance test for datadog-agent in a repeatable rig, with varying configuration for datadog-agent. What follows is a statistical summary of a brief datadog-agent run for each configuration across the SHAs given above. The goal of these tests is to determine quickly whether datadog-agent performance is changed, and to what degree, by a pull request.
Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval.
We decide whether a change in performance is a "regression" -- a change worth investigating further -- if both of the following two criteria are true:
- The estimated |Δ mean %| ≥ 5.00%. This criterion intends to answer the question "Does the estimated change in mean optimization goal performance have a meaningful impact on your customers?". We assume that when |Δ mean %| < 5.00%, the impact on your customers is not meaningful. We also assume that a performance change in optimization goal is worth investigating whether it is an increase or decrease, so long as the magnitude of the change is sufficiently large.
- Zero is not in the 90.00% confidence interval "Δ mean % CI" about "Δ mean %". This statement is equivalent to saying that there is at least a 90.00% chance that the mean difference in optimization goal is not zero. This criterion intends to answer the question, "Is there a statistically significant difference in mean optimization goal performance?". It also means there is no more than a 10.00% chance this criterion reports a statistically significant difference when the true difference in mean optimization goal is zero -- a "false positive". We assume you are willing to accept a 10.00% chance of inaccurately detecting a change in performance when no true difference exists.
The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values of "Δ mean %" mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.
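The two criteria above compose into a simple check. The sketch below is illustrative only (not the detector's actual implementation): it flags a result when the estimated change clears the practical-significance threshold and its confidence interval excludes zero:

```python
def is_regression(delta_mean_pct, ci_low, ci_high, threshold_pct=5.0):
    """Return True when both detection criteria described above hold."""
    # Criterion 1: |Δ mean %| must reach the practical-significance threshold.
    large_enough = abs(delta_mean_pct) >= threshold_pct
    # Criterion 2: zero must lie outside the 90% confidence interval,
    # i.e. the change is statistically significant at that level.
    ci_excludes_zero = ci_low > 0 or ci_high < 0
    return large_enough and ci_excludes_zero

# tcp_syslog_to_blackhole: statistically significant (CI excludes zero),
# but the +0.99% change is below the 5% threshold, so it is not flagged.
print(is_regression(+0.99, +0.93, +1.05))  # False
```

This is why rows like `tcp_syslog_to_blackhole` appear in the fine-details table with 100.00% confidence yet trigger no regression report: significance alone is not enough without a meaningful effect size.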
No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.
Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
|---|---|---|---|---|
| tcp_syslog_to_blackhole | ingress throughput | +0.99 | [+0.93, +1.05] | 100.00% |
| uds_dogstatsd_to_api | ingress throughput | +0.89 | [+0.30, +1.48] | 94.80% |
| file_to_blackhole | egress throughput | +0.48 | [-0.61, +1.56] | 42.68% |
| tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.06, +0.06] | 0.69% |
| otel_to_otel_logs | ingress throughput | -0.00 | [-0.07, +0.06] | 7.55% |
| trace_agent_json | ingress throughput | -0.01 | [-0.04, +0.02] | 37.86% |
| trace_agent_msgpack | ingress throughput | -0.07 | [-0.10, -0.04] | 99.85% |
The Serverless failures in GitHub Actions are unrelated to this PR.
@gbbr Would you be able to take a look at this?
We're going to do the performance testing, etc., but this moves some code out of `pkg/trace` into other modules. Not sure if there need to be OTel changes for this.
After 9 approvals, and at the rate this is bit-rotting, I am going to merge with privileges. Failures on tests at the time of this merge can also be seen on `main`.