
USMON-718: Kafka fetch error code

Status: Open · DanielLavie opened this pull request 1 year ago · 4 comments

What does this PR do?

This PR improves the USM Kafka monitoring feature by:

  • Implementing support for parsing error codes from Kafka fetch responses
  • Incorporating error codes into the Kafka request stats and into the protobuf encoding of Kafka aggregations (a rough sketch of the idea follows this list)
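
As a hedged illustration of what keying Kafka request stats by error code could look like, here is a minimal Go sketch. The type and field names (`RequestStat`, `Key`, `ErrorCode`, `AddRequest`) are hypothetical and do not mirror the actual datadog-agent types; the real implementation lives in the USM Kafka packages and their protobuf marshaling code.

```go
package kafka

// RequestStat aggregates observations for one (request, error code) bucket.
// Illustrative only: not the datadog-agent's actual type.
type RequestStat struct {
	Count int
}

// Key identifies a Kafka request aggregation, now including the error code
// returned in the response (0 means no error), analogous to how HTTP stats
// are keyed by status code.
type Key struct {
	TopicName string
	Operation string // e.g. "fetch" or "produce"
	ErrorCode int32
}

// RequestStats holds all aggregations seen during a reporting interval.
type RequestStats struct {
	data map[Key]*RequestStat
}

// AddRequest records one observed Kafka transaction under its error code.
func (r *RequestStats) AddRequest(k Key) {
	if r.data == nil {
		r.data = make(map[Key]*RequestStat)
	}
	stat, ok := r.data[k]
	if !ok {
		stat = &RequestStat{}
		r.data[k] = stat
	}
	stat.Count++
}
```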

Motivation

Our aim is to include error codes in the USM RED metrics for the Kafka protocol. This PR is the first step; a follow-up will parse Kafka produce responses to extract their error codes as well.

Additional Notes

  • At present, the backend does not support non-HTTP error codes. Therefore, we won't be able to view these error codes in the UI until this issue is resolved.
  • Supporting kernel 4.14 forced some trade-offs in code clarity. We will definitely need better documentation for the Kafka kernel state machine, both at a high level and within the code itself (a rough sketch of the wire-format detail being extracted follows this list).
  • Load test results can be found here. The profiler shows an increase of ~37% in CPU usage in the Kafka code path. The same method is used in the HTTP code path, and I couldn't find a good optimization to apply in this context.
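
For orientation, here is a minimal user-space Go sketch of the wire-format detail involved: in a Kafka fetch response, each partition response carries a big-endian int16 error code. This is an illustration only; the PR's actual parsing is done incrementally by the eBPF state machine, and `readErrorCode` is a hypothetical helper, not code from this change.

```go
package kafka

import (
	"encoding/binary"
	"errors"
)

// readErrorCode decodes the int16 error code that starts at offset in a
// fetch response payload. Kafka encodes integers in network (big-endian)
// byte order; an error code of 0 means the partition fetch succeeded.
func readErrorCode(payload []byte, offset int) (int16, error) {
	if offset < 0 || offset+2 > len(payload) {
		return 0, errors.New("fetch response truncated before error code")
	}
	return int16(binary.BigEndian.Uint16(payload[offset : offset+2])), nil
}
```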

Possible Drawbacks / Trade-offs

Describe how to test/QA your changes

DanielLavie · May 26 '24 11:05

Regression Detector

Regression Detector Results

Run ID: 80ebc263-05e4-422a-aeeb-09a7bac7dbba · Metrics dashboard · Target profiles

Baseline: 33f8cac6b616742250e28426c1c91592201bbf36
Comparison: db98f406eb4a9ded7f1298eebd4ea3f1f0600dee

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

No significant changes in experiment optimization goals

Confidence level: 90.00% · Effect size tolerance: |Δ mean %| ≥ 5.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI | links |
| --- | --- | --- | --- | --- | --- |
| ➖ | basic_py_check | % cpu utilization | +1.73 | [-0.97, +4.43] | Logs |
| ➖ | otel_to_otel_logs | ingress throughput | +0.15 | [-0.66, +0.96] | Logs |
| ➖ | file_tree | memory utilization | +0.10 | [+0.06, +0.15] | Logs |
| ➖ | idle | memory utilization | +0.08 | [+0.05, +0.11] | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.01, +0.01] | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | -0.00 | [-0.00, +0.00] | Logs |
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | -0.36 | [-1.26, +0.54] | Logs |
| ➖ | pycheck_1000_100byte_tags | % cpu utilization | -1.49 | [-6.21, +3.23] | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -2.42 | [-15.40, +10.56] | Logs |

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true (an illustrative sketch of this decision rule follows the list):

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".
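
To make the rule above concrete, here is a small, self-contained Go sketch of the same decision logic. It is an illustration of the criteria as described in this report, not the Regression Detector's actual code; the type and function names are made up.

```go
package main

import "fmt"

// experimentResult captures the values reported per experiment above.
type experimentResult struct {
	deltaMeanPct  float64 // estimated Δ mean %
	ciLow, ciHigh float64 // 90% confidence interval bounds for Δ mean %
	erratic       bool    // configuration marks the experiment as erratic
}

const effectSizeTolerance = 5.0 // |Δ mean %| threshold from this report

// isRegression returns true only if the effect size is large enough, the
// confidence interval excludes zero, and the experiment is not erratic.
func isRegression(r experimentResult) bool {
	bigEnough := r.deltaMeanPct >= effectSizeTolerance || r.deltaMeanPct <= -effectSizeTolerance
	ciExcludesZero := r.ciLow > 0 || r.ciHigh < 0
	return bigEnough && ciExcludesZero && !r.erratic
}

func main() {
	// basic_py_check from the table above: +1.73 [-0.97, +4.43] -> not a regression
	fmt.Println(isRegression(experimentResult{deltaMeanPct: 1.73, ciLow: -0.97, ciHigh: 4.43}))
}
```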

pr-commenter[bot] · May 26 '24 12:05

Codecov Report

Attention: Patch coverage is 85.71429% with 2 lines in your changes missing coverage. Please review.

Project coverage is 42.50%. Comparing base (4dd3f74) to head (de2d9f8).

:exclamation: Current head de2d9f8 differs from pull request most recent head 3949098

Please upload reports for the commit 3949098 to get more accurate results.

| Files | Patch % | Lines |
| --- | --- | --- |
| pkg/network/encoding/marshal/usm_kafka.go | 80.00% | 1 Missing and 1 partial :warning: |
Additional details and impacted files
```diff
@@             Coverage Diff             @@
##             main   #25929       +/-   ##
===========================================
- Coverage   44.94%   42.50%    -2.45%
===========================================
  Files        2354      256     -2098
  Lines      272845    18657   -254188
===========================================
- Hits       122639     7930   -114709
+ Misses     140536    10368   -130168
+ Partials     9670      359     -9311
```
| Flag | Coverage Δ | |
| --- | --- | --- |
| amzn_aarch64 | 42.66% <85.71%> (-3.13%) | :arrow_down: |
| centos_x86_64 | 42.66% <85.71%> (-3.04%) | :arrow_down: |
| ubuntu_aarch64 | 42.66% <85.71%> (-3.13%) | :arrow_down: |
| ubuntu_x86_64 | 42.68% <85.71%> (-3.11%) | :arrow_down: |
| windows_amd64 | 46.35% <28.57%> (-4.42%) | :arrow_down: |

Flags with carried forward coverage won't be shown. Click here to find out more.

:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.

codecov[bot] · May 27 '24 14:05

Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM:

```sh
inv create-vm --pipeline-id=38356454 --os-family=ubuntu
```

Note: This applies to commit db98f406

pr-commenter[bot] · May 27 '24 15:05

Blocked on https://github.com/DataDog/dd-go/pull/139089

DanielLavie · Jul 01 '24 12:07

/merge

DanielLavie · Jul 04 '24 16:07

:steam_locomotive: MergeQueue: pull request added to the queue

The median merge time in main is 25m.

Use /merge -c to cancel this operation!

dd-devflow[bot] · Jul 04 '24 16:07