
feat(component validation): add sink error path validation + multi config

Open neuronull opened this issue 2 years ago • 3 comments

closes: #16846 closes: #16847

ref: https://github.com/vectordotdev/vector/issues/18027

Notes:

  • The error metrics validation for sinks demonstrated that we need the ability to specify different component configs for different test cases: if the input runner and the component under test share the same codec, it's not really possible to inject a failure without adding a transform to the topology. Since we can reasonably expect to need different config options to hit specific errors on some components (e.g. Data Volume), per-test-case configs were selected as the way forward here.
    • This was implemented by allowing the YAML config to set a "config_name" for a specific test case; the implementation of ValidatableComponent then maps each config to that name. If the framework can't find a config matching the name, it errors out. If the test case specifies no name, the framework simply uses the first config it finds that has no name set (see the sketch after these notes).
  • This PR also demonstrates that the test runner increasingly has to make assumptions (e.g. when testing a sink and expecting a failure, it should still expect to see component_receive_events etc.). As we roll out to more components, those assumptions may fall apart or the logic may grow overly complex; if that happens, we may need to re-evaluate how expected values are set.
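
A minimal sketch of the lookup behavior described above, assuming hypothetical names (`NamedConfig`, `TestCase`, `resolve_config`); it illustrates the name-matching and fallback rules, not Vector's actual implementation:

```rust
// Hypothetical illustration of the `config_name` lookup. None of these
// types or functions are Vector's real API; they exist only to show the
// matching and fallback rules described in the notes above.

/// A component config registered by a validatable component, optionally
/// tagged with a name that test cases can reference.
struct NamedConfig {
    /// `None` marks the default config, used when a test case names none.
    name: Option<String>,
    /// Stand-in for the real component config type.
    config: String,
}

/// The relevant slice of a YAML test case, e.g.:
/// - name: sink error path
///   config_name: bad_codec
struct TestCase {
    config_name: Option<String>,
}

/// Select the config for a test case: match by name when one is given,
/// otherwise fall back to the first unnamed config. A named test case
/// with no matching config is an error.
fn resolve_config<'a>(
    test_case: &TestCase,
    configs: &'a [NamedConfig],
) -> Result<&'a NamedConfig, String> {
    match &test_case.config_name {
        Some(wanted) => configs
            .iter()
            .find(|c| c.name.as_deref() == Some(wanted.as_str()))
            .ok_or_else(|| format!("no component config named {wanted:?}")),
        None => configs
            .iter()
            .find(|c| c.name.is_none())
            .ok_or_else(|| "no default (unnamed) component config".to_string()),
    }
}
```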

neuronull avatar Jul 21 '23 22:07 neuronull

Datadog Report

Branch report: neuronull/component_validation_sink_sad_path
Commit report: 6aa38e3

:white_check_mark: vector: 0 Failed, 0 New Flaky, 1932 Passed, 0 Skipped, 1m 21.86s Wall Time

Note: despite this actually being review-ready, I am marking it as blocked because we likely won't ramp someone else onto the feature for a few weeks.

neuronull avatar Jul 25 '23 15:07 neuronull

Datadog Report

Branch report: neuronull/component_validation_sink_sad_path
Commit report: ea53261
Test service: vector

:white_check_mark: 0 Failed, 2105 Passed, 0 Skipped, 1m 23.8s Wall Time

Datadog Report

Branch report: neuronull/component_validation_sink_sad_path
Commit report: e959d9c
Test service: vector

:white_check_mark: 0 Failed, 2118 Passed, 0 Skipped, 1m 24.77s Wall Time

Regression Detector Results

Run ID: a0b3860b-b639-4401-8452-c9adc951492b
Baseline: 695f847d1711923261acdec0ad029185c7826521
Comparison: a6da1d8f4357513161520ae4c9fac96859d7de24
Total CPUs: 7

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

No significant changes in experiment optimization goals

Confidence level: 90.00% Effect size tolerance: |Δ mean %| ≥ 5.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI |
| --- | --- | --- | --- | --- |
| ➖ | syslog_log2metric_humio_metrics | ingress throughput | +4.38 | [+4.24, +4.52] |
| ➖ | syslog_regex_logs2metric_ddmetrics | ingress throughput | +3.77 | [+3.65, +3.89] |
| ➖ | syslog_loki | ingress throughput | +3.10 | [+2.99, +3.20] |
| ➖ | syslog_humio_logs | ingress throughput | +2.92 | [+2.82, +3.03] |
| ➖ | syslog_splunk_hec_logs | ingress throughput | +2.04 | [+1.97, +2.11] |
| ➖ | splunk_hec_route_s3 | ingress throughput | +1.89 | [+1.37, +2.41] |
| ➖ | syslog_log2metric_splunk_hec_metrics | ingress throughput | +1.21 | [+1.07, +1.35] |
| ➖ | datadog_agent_remap_blackhole | ingress throughput | +1.03 | [+0.92, +1.14] |
| ➖ | syslog_log2metric_tag_cardinality_limit_blackhole | ingress throughput | +0.68 | [+0.55, +0.81] |
| ➖ | http_text_to_http_json | ingress throughput | +0.33 | [+0.19, +0.47] |
| ➖ | datadog_agent_remap_datadog_logs | ingress throughput | +0.26 | [+0.16, +0.35] |
| ➖ | datadog_agent_remap_blackhole_acks | ingress throughput | +0.18 | [+0.09, +0.27] |
| ➖ | http_to_http_noack | ingress throughput | +0.15 | [+0.06, +0.24] |
| ➖ | http_to_http_json | ingress throughput | +0.06 | [-0.02, +0.14] |
| ➖ | splunk_hec_indexer_ack_blackhole | ingress throughput | +0.00 | [-0.14, +0.15] |
| ➖ | splunk_hec_to_splunk_hec_logs_acks | ingress throughput | +0.00 | [-0.16, +0.16] |
| ➖ | splunk_hec_to_splunk_hec_logs_noack | ingress throughput | -0.05 | [-0.16, +0.07] |
| ➖ | enterprise_http_to_http | ingress throughput | -0.07 | [-0.15, +0.01] |
| ➖ | datadog_agent_remap_datadog_logs_acks | ingress throughput | -0.10 | [-0.18, -0.01] |
| ➖ | http_to_s3 | ingress throughput | -0.50 | [-0.78, -0.22] |
| ➖ | otlp_grpc_to_blackhole | ingress throughput | -0.55 | [-0.64, -0.46] |
| ➖ | fluent_elasticsearch | ingress throughput | -0.62 | [-1.10, -0.13] |
| ➖ | http_to_http_acks | ingress throughput | -0.75 | [-2.05, +0.56] |
| ➖ | otlp_http_to_blackhole | ingress throughput | -1.03 | [-1.17, -0.88] |
| ➖ | socket_to_socket_blackhole | ingress throughput | -1.30 | [-1.38, -1.21] |
| ➖ | file_to_blackhole | egress throughput | -2.32 | [-4.88, +0.25] |
| ➖ | http_elasticsearch | ingress throughput | -2.92 | [-2.99, -2.85] |

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we flag a change in performance as a "regression" (a change worth investigating further) only if all of the following criteria are true (see the sketch after this list):

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".
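
The decision rule can be restated as a short sketch; the types and names here are hypothetical, not the regression detector's actual interface:

```rust
// Illustrative restatement of the three flagging criteria above. All names
// and types are hypothetical, introduced only for this sketch.

struct Experiment {
    delta_mean_pct: f64, // estimated Δ mean %
    ci_low_pct: f64,     // lower bound of the 90% CI on Δ mean %
    ci_high_pct: f64,    // upper bound of the 90% CI on Δ mean %
    erratic: bool,       // whether the config marks the experiment "erratic"
}

/// Effect size tolerance quoted in the report: |Δ mean %| ≥ 5.00%.
const EFFECT_SIZE_TOLERANCE_PCT: f64 = 5.0;

/// A change is flagged as a regression only when all three criteria hold.
fn is_regression(e: &Experiment) -> bool {
    let big_enough = e.delta_mean_pct.abs() >= EFFECT_SIZE_TOLERANCE_PCT;
    // The CI excludes zero when both bounds lie on the same side of it.
    let ci_excludes_zero = e.ci_low_pct > 0.0 || e.ci_high_pct < 0.0;
    big_enough && ci_excludes_zero && !e.erratic
}
```

For example, syslog_log2metric_humio_metrics above has a CI of [+4.24, +4.52] that excludes zero, but its |Δ mean %| of 4.38 is below the 5.00% tolerance, so it is not flagged.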

github-actions[bot] avatar Feb 22 '24 17:02 github-actions[bot]