datadog-agent
Fix linter errors from `inv lint-go` command with golangci-lint version `v1.55.2`
What does this PR do?
Fix linter errors from `inv lint-go` with golangci-lint version `v1.55.2`.

Linter output:
```
Linters for module /home/maxime/dev/go/src/github.com/DataDog/datadog-agent failed (base flavor)
Linter failures:
pkg/serverless/trigger/extractor.go:25:9: indent-error-flow: if block ends with a return statement, so drop this else and outdent its block (revive)
	} else {
		return "aws"
	}
pkg/network/proc_net.go:56:10: indent-error-flow: if block ends with a return statement, so drop this else and outdent its block (revive)
	} else {
		iter := &fieldIterator{data: b}
		iter.nextField()            // entry number
		rawLocal = iter.nextField() // local_address
		iter.nextField()            // remote_address
		rawState = iter.nextField() // st
		state, err := strconv.ParseInt(string(rawState), 16, 0)
		if err != nil {
			log.Errorf("error parsing tcp state [%s] as hex: %s", rawState, err)
			continue
		}
		if state != status {
			continue
		}
		idx := bytes.IndexByte(rawLocal, ':')
		if idx == -1 {
			continue
		}
		port, err := strconv.ParseUint(string(rawLocal[idx+1:]), 16, 16)
		if err != nil {
			log.Errorf("error parsing port [%s] as hex: %s", rawLocal[idx+1:], err)
			continue
		}
		ports = append(ports, uint16(port))
	}
pkg/orchestrator/cache.go:69:9: indent-error-flow: if block ends with a return statement, so drop this else and outdent its block (revive)
	} else {
		incCacheHit(nodeType)
		return true
	}
pkg/serverless/appsec/config/config.go:52:9: indent-error-flow: if block ends with a return statement, so drop this else and outdent its block (revive)
	} else {
		return enabled, set, nil
	}
pkg/network/go/dwarfutils/locexpr/exec.go:105:10: indent-error-flow: if block ends with a return statement, so drop this else and outdent its block (revive)
	} else {
		return offset
	}
pkg/network/go/rungo/matrix/matrix.go:270:9: indent-error-flow: if block ends with a return statement, so drop this else and outdent its block (revive)
	} else {
		return fmt.Sprintf("%d.%d", v.Major, v.Minor)
	}
pkg/network/go/lutgen/run.go:308:9: indent-error-flow: if block ends with a return statement, so drop this else and outdent its block (revive)
	} else {
		return fmt.Sprintf("%d.%d", v.Major, v.Minor)
	}
pkg/collector/scheduler.go:209:11: superfluous-else: if block ends with a break statement, so drop this else and outdent its block (revive)
	} else {
		errorStats.setLoaderError(config.Name, fmt.Sprintf("%v", loader), err.Error())
		errors = append(errors, fmt.Sprintf("%v: %s", loader, err))
	}
cmd/serverless-init/main.go:48:9: superfluous-else: if block ends with call to panic function, so drop this else and outdent its block (revive)
	} else {
		cliParams := &cliParams{
			args: os.Args[1:],
		}
		err := fxutil.OneShot(run, fx.Supply(cliParams))
		if err != nil {
			logger.Error(err)
		}
	}
cmd/agent/subcommands/integrations/command.go:308:10: indent-error-flow: if block ends with a return statement, so drop this else and outdent its block (move short variable declaration to its own line if necessary) (revive)
	} else {
		return fmt.Errorf("cannot read local wheel %s: %v", args[0], err)
	}

Linters for module /home/maxime/dev/go/src/github.com/DataDog/datadog-agent/pkg/tagset failed (base flavor)
Linter failures:
pkg/tagset/hash_generator.go:105:12: superfluous-else: if block ends with a break statement, so drop this else and outdent its block (revive)
	} else {
		// move 'right' in the hashset because there is already a value,
		// in this bucket, which is not the one we're dealing with right now,
		// we may have already seen this tag
		j = (j + 1) & mask
	}

Linters for module /home/maxime/dev/go/src/github.com/DataDog/datadog-agent/pkg/security/secl failed (base flavor)
Linter failures:
pkg/security/secl/compiler/generators/accessors/common/types.go:116:9: indent-error-flow: if block ends with a return statement, so drop this else and outdent its block (revive)
	} else {
		if sf.Iterator != nil || sf.IsArray {
			return "[]string{}"
		}
		return `""`
	}
pkg/security/secl/compiler/generators/accessors/common/types.go:132:9: indent-error-flow: if block ends with a return statement, so drop this else and outdent its block (revive)
	} else {
		return `""`
	}
pkg/security/secl/compiler/generators/accessors/accessors.go:826:9: indent-error-flow: if block ends with a return statement, so drop this else and outdent its block (revive)
	} else {
		if isArray {
			return "[]string{}"
		}
		return `""`
	}
pkg/security/secl/compiler/eval/utils.go:42:9: indent-error-flow: if block ends with a return statement, so drop this else and outdent its block (revive)
	} else {
		return nil, 0, fmt.Errorf("unsupported address length %d", len(ip))
	}
```
Bloop Bleep... Dogbot Here
Regression Detector Results
Run ID: 3d1f0d92-4b51-439a-9d99-14de3525d1da
Baseline: 08ffe66297a50132d1d00738b8bb241444ca00de
Comparison: 1a6c78429d3209dde5f7e83e80fe686b5d136a2d
Total CPUs: 7
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
Experiments with missing or malformed data
- basic_py_check
Usually, this warning means that there is no usable optimization goal data for that experiment, which could be a result of misconfiguration.
No significant changes in experiment optimization goals
Confidence level: 90.00% Effect size tolerance: |Δ mean %| ≥ 5.00%
There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.
Experiments ignored for regressions
Regressions in experiments with settings containing `erratic: true` are ignored.
| perf | experiment | goal | Δ mean % | Δ mean % CI |
|---|---|---|---|---|
| ➖ | file_to_blackhole | % cpu utilization | -0.45 | [-7.01, +6.10] |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI |
|---|---|---|---|---|
| ➖ | otel_to_otel_logs | ingress throughput | +0.93 | [+0.32, +1.54] |
| ➖ | idle | memory utilization | +0.57 | [+0.53, +0.60] |
| ➖ | process_agent_standard_check_with_stats | memory utilization | +0.06 | [+0.03, +0.09] |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | +0.02 | [-0.05, +0.09] |
| ➖ | trace_agent_json | ingress throughput | +0.01 | [-0.01, +0.03] |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.00 | [-0.00, +0.00] |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.00, +0.00] |
| ➖ | trace_agent_msgpack | ingress throughput | -0.02 | [-0.02, -0.01] |
| ➖ | process_agent_standard_check | memory utilization | -0.07 | [-0.10, -0.03] |
| ➖ | file_tree | memory utilization | -0.22 | [-0.30, -0.14] |
| ➖ | file_to_blackhole | % cpu utilization | -0.45 | [-7.01, +6.10] |
| ➖ | process_agent_real_time_mode | memory utilization | -0.69 | [-0.73, -0.65] |
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | -0.82 | [-2.24, +0.60] |
Explanation
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we flag a change in performance as a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
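The three criteria above can be sketched as a small predicate (the type and field names are illustrative, not the detector's actual implementation):

```go
package main

import (
	"fmt"
	"math"
)

// experiment holds the summary statistics reported per table row,
// plus the erratic flag from the experiment's settings.
type experiment struct {
	deltaMeanPct  float64 // estimated Δ mean %
	ciLow, ciHigh float64 // 90% confidence interval on Δ mean %
	erratic       bool
}

// isRegression applies the three criteria from the explanation above:
// effect size of at least 5%, a CI that excludes zero, and not erratic.
func isRegression(e experiment) bool {
	if e.erratic {
		return false // erratic experiments are ignored outright
	}
	if math.Abs(e.deltaMeanPct) < 5.0 {
		return false // change too small to merit a closer look
	}
	if e.ciLow <= 0 && e.ciHigh >= 0 {
		return false // CI contains zero: not statistically significant
	}
	return true
}

func main() {
	// The file_to_blackhole row: -0.45 with CI [-7.01, +6.10].
	// Small effect AND a CI straddling zero, so not a regression.
	fmt.Println(isRegression(experiment{deltaMeanPct: -0.45, ciLow: -7.01, ciHigh: 6.10}))
}
```

This also shows why the report's wide CPU-utilization interval is harmless: even a larger Δ mean would be dismissed while the interval straddles zero.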
Serverless Benchmark Results
`BenchmarkStartEndInvocation`: comparison between c9f6fd00c09fb8d57a4e2fc8d4b7005e336b4f7a and ef90840c4015681a32d376d71c421fec73139a36.
tl;dr
- Skim down the `vs base` column in each chart. If there is a `~`, then there was no statistically significant change to the benchmark. Otherwise, ensure the estimated percent change is either negative or very small.
- The last row of each chart is the `geomean`. Ensure this percentage is either negative or very small.
What is this benchmarking?
The `BenchmarkStartEndInvocation` benchmark compares the amount of time it takes to call the `start-invocation` and `end-invocation` endpoints. For universal instrumentation languages (Dotnet, Golang, Java, Ruby), this represents the majority of the duration overhead added by our tracing layer.
The benchmark is run using a large variety of lambda request payloads. In the charts below, there is one row for each event payload type.
How do I interpret these charts?
The charts below come from `benchstat`. They represent the statistical change in duration (sec/op), memory overhead (B/op), and allocations (allocs/op).
The benchstat docs explain how to interpret these charts.
Before the comparison table, we see common file-level configuration. If there are benchmarks with different configuration (for example, from different packages), benchstat will print separate tables for each configuration.
The table then compares the two input files for each benchmark. It shows the median and 95% confidence interval summaries for each benchmark before and after the change, and an A/B comparison under "vs base". ... The p-value measures how likely it is that any differences were due to random chance (i.e., noise). The "~" means benchstat did not detect a statistically significant difference between the two inputs. ...
Note that "statistically significant" is not the same as "large": with enough low-noise data, even very small changes can be distinguished from noise and considered statistically significant. It is, of course, generally easier to distinguish large changes from noise.
Finally, the last row of the table shows the geometric mean of each column, giving an overall picture of how the benchmarks changed. Proportional changes in the geomean reflect proportional changes in the benchmarks. For example, given n benchmarks, if sec/op for one of them increases by a factor of 2, then the sec/op geomean will increase by a factor of ⁿ√2.
Benchmark stats
```
goos: linux
goarch: amd64
pkg: github.com/DataDog/datadog-agent/pkg/serverless/daemon
cpu: AMD EPYC 7763 64-Core Processor

                                        │ previous    │ current              │
                                        │ sec/op      │ sec/op       vs base │
api-gateway-appsec.json                   87.81µ ± 11%   88.88µ ± 3%  ~ (p=0.529 n=10)
api-gateway-kong-appsec.json              64.97µ ±  3%   66.16µ ± 2%  +1.83% (p=0.029 n=10)
api-gateway-kong.json                     63.74µ ±  2%   65.07µ ± 1%  +2.08% (p=0.002 n=10)
api-gateway-non-proxy-async.json          98.32µ ±  1%   98.95µ ± 1%  ~ (p=0.280 n=10)
api-gateway-non-proxy.json                96.84µ ±  2%   97.83µ ± 3%  +1.02% (p=0.035 n=10)
api-gateway-websocket-connect.json        66.75µ ±  1%   67.27µ ± 1%  ~ (p=0.190 n=10)
api-gateway-websocket-default.json        58.89µ ±  2%   59.08µ ± 2%  ~ (p=0.971 n=10)
api-gateway-websocket-disconnect.json     57.35µ ±  1%   57.74µ ± 2%  ~ (p=0.218 n=10)
api-gateway.json                          115.6µ ±  8%   106.3µ ± 2%  -8.02% (p=0.004 n=10)
application-load-balancer.json            65.81µ ±  9%   58.05µ ± 2%  -11.80% (p=0.000 n=10)
cloudfront.json                           52.25µ ±  4%   44.13µ ± 8%  -15.53% (p=0.000 n=10)
cloudwatch-events.json                    40.24µ ±  7%   38.28µ ± 3%  -4.87% (p=0.009 n=10)
cloudwatch-logs.json                      55.59µ ± 21%   50.87µ ± 6%  -8.49% (p=0.004 n=10)
custom.json                               31.96µ ± 10%   30.20µ ± 1%  -5.51% (p=0.000 n=10)
dynamodb.json                            102.57µ ± 15%   86.00µ ± 12% -16.15% (p=0.001 n=10)
empty.json                                35.90µ ± 10%   32.56µ ± 3%  -9.29% (p=0.000 n=10)
eventbridge-custom.json                   50.23µ ± 20%   44.29µ ± 7%  -11.82% (p=0.029 n=10)
http-api.json                             78.94µ ± 13%   79.02µ ± 7%  ~ (p=0.971 n=10)
kinesis-batch.json                        88.24µ ± 10%   90.78µ ± 56% ~ (p=0.516 n=10)
kinesis.json                              70.49µ ±  9%   78.00µ ± 472% +10.64% (p=0.029 n=10)
s3.json                                   90.77µ ± 56%  308.96µ ± 74% +240.37% (p=0.009 n=10)
sns-batch.json                            134.8µ ± 19%   276.9µ ± 148% +105.38% (p=0.001 n=9+10)
sns.json                                                 270.8µ ± 131%
snssqs.json                                              686.3µ ± 136%
snssqs_no_dd_context.json                                617.0µ ± 242%
sqs-aws-header.json                                      103.0µ ± ∞ ¹
geomean                                   68.55µ         91.36µ       +5.55%
¹ need >= 6 samples for confidence interval at level 0.95

                                        │ previous     │ current              │
                                        │ B/op         │ B/op         vs base │
api-gateway-appsec.json                   41.04Ki ±  3%  41.10Ki ± 3%  ~ (p=0.971 n=10)
api-gateway-kong-appsec.json              28.06Ki ± 13%  28.06Ki ± 12% ~ (p=0.782 n=10)
api-gateway-kong.json                     25.40Ki ±  0%  25.40Ki ± 0%  ~ (p=0.341 n=10)
api-gateway-non-proxy-async.json          51.59Ki ±  0%  51.59Ki ± 0%  ~ (p=0.683 n=10)
api-gateway-non-proxy.json                50.15Ki ±  0%  50.15Ki ± 0%  ~ (p=0.780 n=10)
api-gateway-websocket-connect.json        27.04Ki ±  0%  27.04Ki ± 0%  ~ (p=0.753 n=10)
api-gateway-websocket-default.json        22.31Ki ±  0%  22.31Ki ± 0%  ~ (p=0.589 n=10)
api-gateway-websocket-disconnect.json     21.94Ki ±  0%  21.94Ki ± 0%  ~ (p=0.207 n=10)
api-gateway.json                          52.94Ki ±  0%  52.94Ki ± 0%  ~ (p=0.469 n=10)
application-load-balancer.json            23.08Ki ±  0%  23.08Ki ± 0%  ~ (p=0.267 n=10)
cloudfront.json                           18.54Ki ±  0%  18.54Ki ± 0%  ~ (p=0.124 n=10)
cloudwatch-events.json                    11.57Ki ±  0%  11.57Ki ± 0%  ~ (p=0.145 n=10)
cloudwatch-logs.json                      53.11Ki ±  0%  53.11Ki ± 0%  ~ (p=0.954 n=10)
custom.json                               9.326Ki ±  0%  9.325Ki ± 0%  ~ (p=0.601 n=10)
dynamodb.json                             43.31Ki ±  0%  43.31Ki ± 0%  ~ (p=0.171 n=10)
empty.json                                8.820Ki ±  0%  8.818Ki ± 0%  ~ (p=0.052 n=10)
eventbridge-custom.json                   13.29Ki ±  0%  13.29Ki ± 0%  ~ (p=0.050 n=10)
http-api.json                             24.21Ki ±  0%  24.21Ki ± 0%  ~ (p=0.125 n=10)
kinesis-batch.json                        28.48Ki ±  0%  28.49Ki ± 0%  ~ (p=0.224 n=10)
kinesis.json                              18.24Ki ±  0%  18.24Ki ± 0%  ~ (p=0.223 n=10)
s3.json                                   20.97Ki ±  0%  20.99Ki ± 0%  +0.12% (p=0.002 n=10)
sns-batch.json                            41.66Ki ±  0%  41.69Ki ± 0%  ~ (p=0.082 n=9+10)
sns.json                                                 24.94Ki ± 0%
snssqs.json                                              51.57Ki ± 0%
snssqs_no_dd_context.json                                46.34Ki ± 1%
sqs-aws-header.json                                      19.40Ki ± ∞ ¹
geomean                                   25.28Ki        26.31Ki       +0.02%
¹ need >= 6 samples for confidence interval at level 0.95

                                        │ previous   │ current             │
                                        │ allocs/op  │ allocs/op   vs base │
api-gateway-appsec.json                   629.0 ± 0%   629.0 ± 0%  ~ (p=1.000 n=10) ¹
api-gateway-kong-appsec.json              487.0 ± 0%   487.0 ± 0%  ~ (p=1.000 n=10) ¹
api-gateway-kong.json                     465.0 ± 0%   465.0 ± 0%  ~ (p=1.000 n=10) ¹
api-gateway-non-proxy-async.json          723.0 ± 0%   723.0 ± 0%  ~ (p=1.000 n=10) ¹
api-gateway-non-proxy.json                713.0 ± 0%   713.0 ± 0%  ~ (p=1.000 n=10) ¹
api-gateway-websocket-connect.json        451.0 ± 0%   451.0 ± 0%  ~ (p=1.000 n=10)
api-gateway-websocket-default.json        376.0 ± 0%   376.0 ± 0%  ~ (p=1.000 n=10) ¹
api-gateway-websocket-disconnect.json     366.0 ± 0%   366.0 ± 0%  ~ (p=1.000 n=10) ¹
api-gateway.json                          785.0 ± 0%   785.0 ± 0%  ~ (p=1.000 n=10) ¹
application-load-balancer.json            348.0 ± 0%   348.0 ± 0%  ~ (p=1.000 n=10) ¹
cloudfront.json                           280.0 ± 0%   280.0 ± 0%  ~ (p=1.000 n=10) ¹
cloudwatch-events.json                    217.0 ± 0%   217.0 ± 0%  ~ (p=1.000 n=10) ¹
cloudwatch-logs.json                      210.0 ± 0%   210.0 ± 0%  ~ (p=1.000 n=10) ¹
custom.json                               165.0 ± 0%   165.0 ± 0%  ~ (p=1.000 n=10) ¹
dynamodb.json                             581.0 ± 0%   581.0 ± 0%  ~ (p=1.000 n=10) ¹
empty.json                                156.0 ± 0%   156.0 ± 0%  ~ (p=1.000 n=10) ¹
eventbridge-custom.json                   249.0 ± 0%   249.0 ± 0%  ~ (p=1.000 n=10) ¹
http-api.json                             424.0 ± 0%   424.0 ± 0%  ~ (p=0.474 n=10)
kinesis-batch.json                        382.0 ± 0%   382.0 ± 0%  ~ (p=1.000 n=10)
kinesis.json                              278.0 ± 0%   278.0 ± 0%  ~ (p=0.474 n=10)
s3.json                                   350.0 ± 0%   351.0 ± 0%  +0.29% (p=0.034 n=10)
sns-batch.json                            443.0 ± 0%   444.0 ± 0%  +0.23% (p=0.037 n=9+10)
sns.json                                               315.0 ± 0%
snssqs.json                                            413.0 ± 1%
snssqs_no_dd_context.json                              387.0 ± 2%
sqs-aws-header.json                                    265.0 ± ∞ ²
geomean                                   374.5        369.0       +0.02%
¹ all samples are equal
² need >= 6 samples for confidence interval at level 0.95
```
@hush-hush do you plan on updating `golangci-lint` to 1.55.2 in the deps or not? I think you would just have to change `internal/tools/go.mod` (but that might reveal some other issues 😄)
I'll leave that to platform; those are just the ones I got while working on the agent.
/merge
:steam_locomotive: MergeQueue
Pull request added to the queue.
There are 3 builds ahead! (estimated merge in less than 2h)
Use `/merge -c` to cancel this operation!