
feat(opentelemetry source): support trace ingestion

Open · caibirdme opened this pull request 1 year ago • 13 comments

This PR adds support for ingesting trace data in the OpenTelemetry protocol (OTLP).

Here's the example config:

```yaml
sources:
  foo:
    type: opentelemetry
    grpc:
      address: 0.0.0.0:4317
    http:
      address: 0.0.0.0:4318
      keepalive:
        max_connection_age_jitter_factor: 0.1
        max_connection_age_secs: 300
sinks:
  bar:
    type: console
    inputs: [foo.traces]
    encoding:
      codec: json
  baz:
    type: console
    inputs: [foo.logs]
    encoding:
      codec: logfmt
```
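The keepalive settings above bound the lifetime of gRPC connections. Assuming the jitter factor is applied as a ±fraction of `max_connection_age_secs` (my reading of these options, not verified against the implementation), the effective window can be sketched as:

```python
import random


def connection_age_window(max_age_secs: float, jitter_factor: float) -> tuple[float, float]:
    """Return the (min, max) effective connection age, assuming the
    jitter factor widens the configured age by +/- that fraction."""
    delta = max_age_secs * jitter_factor
    return (max_age_secs - delta, max_age_secs + delta)


def sample_connection_age(max_age_secs: float, jitter_factor: float) -> float:
    """Draw one jittered connection age uniformly from the window."""
    lo, hi = connection_age_window(max_age_secs, jitter_factor)
    return random.uniform(lo, hi)
```

With the config above (300 s, factor 0.1), each connection would be closed somewhere between roughly 270 and 330 seconds, so clients do not all reconnect at once.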

We can use telemetrygen (from opentelemetry-collector-contrib) to test it:

  1. install: `go install github.com/open-telemetry/opentelemetry-collector-contrib/cmd/telemetrygen@latest`
  2. produce some demo traces: `telemetrygen traces --otlp-insecure --duration 30s`
  3. produce some demo logs: `telemetrygen logs --duration 30s --otlp-insecure`

caibirdme avatar Jan 27 '24 03:01 caibirdme

@caibirdme, by any chance are you also working on an opentelemetry sink for traces?

gcuberes avatar Feb 15 '24 09:02 gcuberes

@caibirdme, by any chance are you also working on an opentelemetry sink for traces?

@gcuberes Technically, converting data to the OpenTelemetry (OTel) format should not be difficult: it only involves serializing the data into the specified Protocol Buffers (PB) format and sending it via gRPC or HTTP to a fixed endpoint. However, there are some challenges around how such a sink would be used.

Firstly, users can modify the received trace data with VRL, which makes it difficult to ensure that the data finally sent still conforms to the OTel protocol specification (some fields may be missing or have incorrect types). So far I have not found any way to constrain the schema of sink data.

Secondly, if we want to convert arbitrary logs into OTel traces, users need to configure every field required by OTel traces, which is cumbersome to use.

If the goal is simply to receive OpenTelemetry traces and forward them unchanged to another location over the OTel protocol, similar to a proxy, that may be acceptable for some use cases, but I don't know if that is what people want.

In summary, while technically feasible, there are challenges in ensuring the data conforms to the OTel protocol and in keeping it easy to use for end users.
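To illustrate the first point, here is a hypothetical config (the transform and the event field names such as `.trace_id` are my assumptions about the internal event layout, sketched only to show the problem):

```yaml
transforms:
  mangle:
    type: remap
    inputs: [foo.traces]
    source: |
      # Deleting or retyping fields is perfectly legal VRL,
      # but the resulting event no longer matches the OTel span schema.
      del(.trace_id)                           # required field removed
      .start_time_unix_nano = "not a number"   # wrong type
```

Nothing in the topology would stop a config like this from feeding schema-breaking events into a future OTel sink, which is exactly the constraint problem described above.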

caibirdme avatar Feb 17 '24 07:02 caibirdme

Could anybody take a look at this PR?

caibirdme avatar Mar 18 '24 08:03 caibirdme

Hi, sorry, this one slipped my attention. Vector has pretty poor support for traces; the only sink that can send out traces is the Datadog sink, and even then the format of those traces needs to be pretty precise. Anything else would be fairly error prone. I'm curious what your use case for this source would be?

StephenWakely avatar Mar 18 '24 17:03 StephenWakely

We're using ClickHouse to store the logs & traces, so the clickhouse sink is enough for us. The path looks like client -> send OTLP trace/log -> vector -> vrl -> clickhouse. We also want to try Loki in the future; the loki sink is supported as well.
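That path can be sketched as a Vector topology (hedged: the remap body, endpoint, and table name below are placeholders I made up, not values from this thread):

```yaml
sources:
  otel_in:
    type: opentelemetry
    grpc:
      address: 0.0.0.0:4317
    http:
      address: 0.0.0.0:4318
transforms:
  shape:
    type: remap
    inputs: [otel_in.traces, otel_in.logs]
    source: |
      # Reshape events into whatever columns the ClickHouse table expects.
      .ingested_at = now()
sinks:
  ch:
    type: clickhouse
    inputs: [shape]
    endpoint: http://clickhouse:8123   # placeholder
    table: otel_events                 # placeholder
```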

caibirdme avatar Mar 19 '24 01:03 caibirdme

We're using ClickHouse to store the logs & traces, so the clickhouse sink is enough for us. The path looks like client -> send OTLP trace/log -> vector -> vrl -> clickhouse. We also want to try Loki in the future; the loki sink is supported as well.

Gotcha. So you are converting the trace to a log event to send it to one of those sinks? I'm OK with us adding this; it's just that the user experience will be pretty rough, so I'd want to label it as experimental in the docs. Do you mind updating the docs by adding a new "how it works" section here: https://github.com/vectordotdev/vector/blob/58a4a2ef52e606c0f9b9fa975cf114b661300584/website/cue/reference/components/sources/opentelemetry.cue#L197-L207 ?

jszwedko avatar Mar 19 '24 21:03 jszwedko

Gotcha. So you are converting the trace to a log event to send it to one of those sinks? I'm OK with us adding this; it's just that the user experience will be pretty rough, so I'd want to label it as experimental in the docs. Do you mind updating the docs by adding a new "how it works" section here:

Sure, but I don't know exactly what should be put into "how it works". Do you mean how the source ingests trace data and converts it into log events?

caibirdme avatar Mar 20 '24 07:03 caibirdme

Gotcha. So you are converting the trace to a log event to send it to one of those sinks? I'm ok with us adding this, it's just that the user experience will be pretty rough so I'd want to label it as experimental in the docs. Do you mind making the docs updates by adding a new "how it works" section here:

Sure, but I don't know exactly what should be put into "how it works". Do you mean how the source ingests trace data and converts it into log events?

Right, I would just add something about it being experimental, having limited processing functionality available, and noting that the internal data format may change in the future.
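Such a section might look like the following in the cue file referenced earlier (a sketch only: the entry name, title, and wording are mine, not final):

```cue
how_it_works: {
	traces: {
		title: "Ingesting traces"
		body: """
			Trace support in this source is experimental. Incoming OTLP
			trace data is converted into Vector trace events, which
			currently have limited processing functionality available
			downstream, and the internal representation of these events
			may change in a future release.
			"""
	}
}
```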

jszwedko avatar Mar 20 '24 20:03 jszwedko

@StephenWakely pls look at this

caibirdme avatar Apr 24 '24 06:04 caibirdme

Not sure why that changelog job is failing. I'll ask someone to take a look.

StephenWakely avatar Apr 25 '24 15:04 StephenWakely

The failed checks seem unrelated to this PR; any ideas? @StephenWakely @jszwedko

caibirdme avatar May 06 '24 08:05 caibirdme

The failed checks seem unrelated to this PR; any ideas? @StephenWakely @jszwedko

Ah, yes, apologies. If you merge in master, it should fix that CI issue.

jszwedko avatar May 06 '24 13:05 jszwedko

@StephenWakely pls have a look

caibirdme avatar May 13 '24 03:05 caibirdme

Regression Detector Results

Run ID: d90a9082-d1dd-4698-a480-09d814abef26 Baseline: 234b126f733472df87caa4cec23be6e4396c05de Comparison: d3d3e940996537c377b9a568fd8884887301f8cb Total CPUs: 7

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

Significant changes in experiment optimization goals

Confidence level: 90.00% Effect size tolerance: |Δ mean %| ≥ 5.00%

perf experiment goal Δ mean % Δ mean % CI
otlp_http_to_blackhole ingress throughput +6.22 [+6.09, +6.34]
syslog_log2metric_tag_cardinality_limit_blackhole ingress throughput -5.40 [-5.50, -5.30]

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI
otlp_http_to_blackhole ingress throughput +6.22 [+6.09, +6.34]
socket_to_socket_blackhole ingress throughput +1.55 [+1.48, +1.62]
datadog_agent_remap_blackhole_acks ingress throughput +1.51 [+1.42, +1.61]
datadog_agent_remap_datadog_logs ingress throughput +0.88 [+0.77, +0.98]
datadog_agent_remap_datadog_logs_acks ingress throughput +0.84 [+0.76, +0.92]
otlp_grpc_to_blackhole ingress throughput +0.73 [+0.63, +0.82]
http_to_s3 ingress throughput +0.35 [+0.07, +0.63]
datadog_agent_remap_blackhole ingress throughput +0.34 [+0.26, +0.43]
file_to_blackhole egress throughput +0.32 [-2.11, +2.76]
http_to_http_noack ingress throughput +0.15 [+0.06, +0.24]
http_to_http_json ingress throughput +0.04 [-0.04, +0.11]
splunk_hec_to_splunk_hec_logs_acks ingress throughput +0.00 [-0.14, +0.14]
splunk_hec_indexer_ack_blackhole ingress throughput -0.00 [-0.15, +0.14]
splunk_hec_to_splunk_hec_logs_noack ingress throughput -0.03 [-0.14, +0.09]
enterprise_http_to_http ingress throughput -0.13 [-0.21, -0.04]
splunk_hec_route_s3 ingress throughput -0.26 [-0.71, +0.19]
syslog_log2metric_humio_metrics ingress throughput -0.32 [-0.42, -0.22]
http_to_http_acks ingress throughput -0.38 [-1.73, +0.97]
http_elasticsearch ingress throughput -0.58 [-0.65, -0.51]
syslog_regex_logs2metric_ddmetrics ingress throughput -1.19 [-1.27, -1.11]
http_text_to_http_json ingress throughput -1.22 [-1.33, -1.11]
fluent_elasticsearch ingress throughput -1.43 [-1.91, -0.96]
syslog_loki ingress throughput -3.85 [-3.90, -3.80]
syslog_humio_logs ingress throughput -4.06 [-4.18, -3.93]
syslog_log2metric_splunk_hec_metrics ingress throughput -4.18 [-4.33, -4.03]
syslog_splunk_hec_logs ingress throughput -4.18 [-4.27, -4.10]
syslog_log2metric_tag_cardinality_limit_blackhole ingress throughput -5.40 [-5.50, -5.30]

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".
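The three criteria above amount to a simple decision rule; a sketch (field names are mine, and note that by these criteria a large improvement is also flagged as "worth investigating"):

```python
from dataclasses import dataclass


@dataclass
class Experiment:
    delta_mean_pct: float  # estimated Δ mean %
    ci_low: float          # lower bound of the 90% CI on Δ mean %
    ci_high: float         # upper bound of the 90% CI on Δ mean %
    erratic: bool = False  # marked "erratic" in the experiment config


def is_flagged(exp: Experiment, tolerance: float = 5.0) -> bool:
    """Flag a change worth investigating: effect size at least the
    tolerance, CI excluding zero, and not marked erratic."""
    big_enough = abs(exp.delta_mean_pct) >= tolerance
    ci_excludes_zero = exp.ci_low > 0 or exp.ci_high < 0
    return big_enough and ci_excludes_zero and not exp.erratic
```

For example, `syslog_log2metric_tag_cardinality_limit_blackhole` in the first run above (-5.40, CI [-5.50, -5.30]) is flagged, while `socket_to_socket_blackhole` (+1.55, CI [+1.48, +1.62]) is not.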

github-actions[bot] avatar May 30 '24 12:05 github-actions[bot]

Regression Detector Results

Run ID: 26048d96-e73d-4d56-8799-d90e07cb4421 Baseline: 234b126f733472df87caa4cec23be6e4396c05de Comparison: 74f903120b16bdd24f5d1eefdc1e27efd5f7fe95 Total CPUs: 7

Significant changes in experiment optimization goals

Confidence level: 90.00% Effect size tolerance: |Δ mean %| ≥ 5.00%

perf experiment goal Δ mean % Δ mean % CI
syslog_humio_logs ingress throughput -5.32 [-5.47, -5.17]
syslog_splunk_hec_logs ingress throughput -5.61 [-5.70, -5.52]
syslog_log2metric_tag_cardinality_limit_blackhole ingress throughput -6.37 [-6.50, -6.24]

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI
syslog_log2metric_humio_metrics ingress throughput +4.16 [+4.01, +4.30]
otlp_http_to_blackhole ingress throughput +3.82 [+3.68, +3.96]
datadog_agent_remap_blackhole ingress throughput +2.16 [+2.06, +2.25]
datadog_agent_remap_blackhole_acks ingress throughput +1.13 [+1.01, +1.24]
socket_to_socket_blackhole ingress throughput +1.09 [+1.03, +1.14]
file_to_blackhole egress throughput +0.93 [-1.58, +3.45]
datadog_agent_remap_datadog_logs_acks ingress throughput +0.88 [+0.80, +0.97]
otlp_grpc_to_blackhole ingress throughput +0.79 [+0.70, +0.89]
http_elasticsearch ingress throughput +0.46 [+0.36, +0.57]
splunk_hec_route_s3 ingress throughput +0.44 [-0.02, +0.89]
http_to_http_acks ingress throughput +0.21 [-1.15, +1.57]
http_to_http_noack ingress throughput +0.09 [-0.00, +0.19]
http_to_http_json ingress throughput +0.05 [-0.02, +0.13]
fluent_elasticsearch ingress throughput +0.02 [-0.46, +0.50]
splunk_hec_to_splunk_hec_logs_acks ingress throughput -0.00 [-0.14, +0.14]
splunk_hec_indexer_ack_blackhole ingress throughput -0.00 [-0.15, +0.14]
splunk_hec_to_splunk_hec_logs_noack ingress throughput -0.01 [-0.12, +0.10]
datadog_agent_remap_datadog_logs ingress throughput -0.02 [-0.12, +0.09]
http_to_s3 ingress throughput -0.04 [-0.32, +0.25]
enterprise_http_to_http ingress throughput -0.13 [-0.21, -0.06]
syslog_regex_logs2metric_ddmetrics ingress throughput -0.38 [-0.51, -0.25]
http_text_to_http_json ingress throughput -0.86 [-0.98, -0.74]
syslog_log2metric_splunk_hec_metrics ingress throughput -3.81 [-3.95, -3.67]
syslog_loki ingress throughput -4.29 [-4.36, -4.22]
syslog_humio_logs ingress throughput -5.32 [-5.47, -5.17]
syslog_splunk_hec_logs ingress throughput -5.61 [-5.70, -5.52]
syslog_log2metric_tag_cardinality_limit_blackhole ingress throughput -6.37 [-6.50, -6.24]

github-actions[bot] avatar May 30 '24 14:05 github-actions[bot]

Regression Detector Results

Run ID: 71d27108-0499-4b38-b646-71f9e463d748 Baseline: 234b126f733472df87caa4cec23be6e4396c05de Comparison: 4a31e4e402245167b9a3df672b7e5de65e384fc0 Total CPUs: 7

Significant changes in experiment optimization goals

Confidence level: 90.00% Effect size tolerance: |Δ mean %| ≥ 5.00%

perf experiment goal Δ mean % Δ mean % CI
syslog_log2metric_tag_cardinality_limit_blackhole ingress throughput -6.06 [-6.18, -5.94]

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI
otlp_http_to_blackhole ingress throughput +4.49 [+4.38, +4.61]
otlp_grpc_to_blackhole ingress throughput +2.02 [+1.92, +2.11]
datadog_agent_remap_blackhole ingress throughput +1.61 [+1.52, +1.71]
socket_to_socket_blackhole ingress throughput +0.76 [+0.70, +0.83]
http_text_to_http_json ingress throughput +0.51 [+0.39, +0.63]
fluent_elasticsearch ingress throughput +0.30 [-0.18, +0.78]
http_to_s3 ingress throughput +0.21 [-0.07, +0.49]
datadog_agent_remap_datadog_logs_acks ingress throughput +0.19 [+0.11, +0.27]
http_to_http_acks ingress throughput +0.18 [-1.18, +1.54]
http_to_http_noack ingress throughput +0.10 [+0.02, +0.18]
http_to_http_json ingress throughput +0.03 [-0.05, +0.10]
splunk_hec_indexer_ack_blackhole ingress throughput -0.00 [-0.15, +0.14]
splunk_hec_to_splunk_hec_logs_acks ingress throughput -0.01 [-0.14, +0.13]
splunk_hec_to_splunk_hec_logs_noack ingress throughput -0.06 [-0.17, +0.05]
enterprise_http_to_http ingress throughput -0.07 [-0.15, +0.02]
http_elasticsearch ingress throughput -0.53 [-0.60, -0.45]
syslog_log2metric_humio_metrics ingress throughput -0.73 [-0.86, -0.60]
file_to_blackhole egress throughput -0.95 [-3.44, +1.55]
datadog_agent_remap_blackhole_acks ingress throughput -1.22 [-1.30, -1.13]
datadog_agent_remap_datadog_logs ingress throughput -1.23 [-1.34, -1.12]
syslog_regex_logs2metric_ddmetrics ingress throughput -1.41 [-1.50, -1.31]
splunk_hec_route_s3 ingress throughput -1.46 [-1.92, -1.00]
syslog_humio_logs ingress throughput -2.62 [-2.74, -2.50]
syslog_log2metric_splunk_hec_metrics ingress throughput -4.56 [-4.70, -4.41]
syslog_loki ingress throughput -4.80 [-4.85, -4.75]
syslog_splunk_hec_logs ingress throughput -4.96 [-5.06, -4.86]
syslog_log2metric_tag_cardinality_limit_blackhole ingress throughput -6.06 [-6.18, -5.94]

github-actions[bot] avatar May 30 '24 14:05 github-actions[bot]

Regression Detector Results

Run ID: 5681deea-26f8-4dfa-9ba5-bc7cdf745941 Baseline: 4a4fc2e9162ece483365959f5222fc5a38d1dad9 Comparison: eb67ee0a12149f2def5f647c02df4c2d0479fa84 Total CPUs: 7

Significant changes in experiment optimization goals

Confidence level: 90.00% Effect size tolerance: |Δ mean %| ≥ 5.00%

perf experiment goal Δ mean % Δ mean % CI
syslog_splunk_hec_logs ingress throughput -5.22 [-5.28, -5.16]
syslog_log2metric_splunk_hec_metrics ingress throughput -5.43 [-5.59, -5.27]
syslog_loki ingress throughput -7.18 [-7.25, -7.11]

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI
otlp_http_to_blackhole ingress throughput +4.67 [+4.54, +4.81]
splunk_hec_route_s3 ingress throughput +2.70 [+2.22, +3.18]
file_to_blackhole egress throughput +2.49 [+0.03, +4.96]
otlp_grpc_to_blackhole ingress throughput +2.47 [+2.38, +2.56]
http_text_to_http_json ingress throughput +2.44 [+2.32, +2.57]
syslog_log2metric_humio_metrics ingress throughput +2.00 [+1.84, +2.17]
socket_to_socket_blackhole ingress throughput +1.05 [+0.96, +1.13]
http_to_http_acks ingress throughput +0.96 [-0.40, +2.33]
datadog_agent_remap_datadog_logs ingress throughput +0.80 [+0.68, +0.92]
fluent_elasticsearch ingress throughput +0.58 [+0.09, +1.06]
http_to_s3 ingress throughput +0.34 [+0.06, +0.62]
datadog_agent_remap_blackhole_acks ingress throughput +0.10 [+0.00, +0.19]
http_to_http_noack ingress throughput +0.09 [-0.01, +0.19]
http_to_http_json ingress throughput +0.04 [-0.04, +0.12]
splunk_hec_to_splunk_hec_logs_acks ingress throughput -0.00 [-0.14, +0.14]
splunk_hec_indexer_ack_blackhole ingress throughput -0.01 [-0.15, +0.14]
datadog_agent_remap_datadog_logs_acks ingress throughput -0.03 [-0.11, +0.06]
enterprise_http_to_http ingress throughput -0.03 [-0.13, +0.06]
splunk_hec_to_splunk_hec_logs_noack ingress throughput -0.05 [-0.16, +0.07]
http_elasticsearch ingress throughput -0.15 [-0.22, -0.08]
syslog_regex_logs2metric_ddmetrics ingress throughput -0.78 [-0.94, -0.62]
datadog_agent_remap_blackhole ingress throughput -1.66 [-1.74, -1.57]
syslog_humio_logs ingress throughput -1.68 [-1.80, -1.56]
syslog_log2metric_tag_cardinality_limit_blackhole ingress throughput -2.83 [-2.96, -2.71]
syslog_splunk_hec_logs ingress throughput -5.22 [-5.28, -5.16]
syslog_log2metric_splunk_hec_metrics ingress throughput -5.43 [-5.59, -5.27]
syslog_loki ingress throughput -7.18 [-7.25, -7.11]

github-actions[bot] avatar May 30 '24 16:05 github-actions[bot]

Regression Detector Results

Run ID: b472ba00-ef5c-4d15-974d-c0c39c40e873 Baseline: 4a4fc2e9162ece483365959f5222fc5a38d1dad9 Comparison: b1d604303810e5ca50d32c39f9ca40fbef40a893 Total CPUs: 7

Significant changes in experiment optimization goals

Confidence level: 90.00% Effect size tolerance: |Δ mean %| ≥ 5.00%

perf experiment goal Δ mean % Δ mean % CI
otlp_http_to_blackhole ingress throughput +5.75 [+5.62, +5.88]
syslog_log2metric_splunk_hec_metrics ingress throughput -5.07 [-5.21, -4.93]

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI
otlp_http_to_blackhole ingress throughput +5.75 [+5.62, +5.88]
syslog_log2metric_humio_metrics ingress throughput +1.86 [+1.78, +1.94]
otlp_grpc_to_blackhole ingress throughput +1.33 [+1.24, +1.42]
splunk_hec_route_s3 ingress throughput +0.97 [+0.52, +1.42]
http_text_to_http_json ingress throughput +0.56 [+0.45, +0.68]
datadog_agent_remap_blackhole_acks ingress throughput +0.49 [+0.41, +0.58]
socket_to_socket_blackhole ingress throughput +0.46 [+0.38, +0.54]
datadog_agent_remap_datadog_logs_acks ingress throughput +0.23 [+0.15, +0.31]
http_to_s3 ingress throughput +0.15 [-0.13, +0.43]
datadog_agent_remap_blackhole ingress throughput +0.14 [+0.04, +0.23]
http_to_http_noack ingress throughput +0.13 [+0.05, +0.20]
http_to_http_json ingress throughput +0.06 [-0.02, +0.14]
splunk_hec_to_splunk_hec_logs_acks ingress throughput +0.00 [-0.14, +0.14]
splunk_hec_indexer_ack_blackhole ingress throughput -0.01 [-0.15, +0.14]
splunk_hec_to_splunk_hec_logs_noack ingress throughput -0.06 [-0.17, +0.06]
http_to_http_acks ingress throughput -0.10 [-1.47, +1.26]
enterprise_http_to_http ingress throughput -0.11 [-0.20, -0.01]
datadog_agent_remap_datadog_logs ingress throughput -0.36 [-0.46, -0.25]
fluent_elasticsearch ingress throughput -0.47 [-0.95, +0.01]
file_to_blackhole egress throughput -0.49 [-2.99, +2.01]
http_elasticsearch ingress throughput -0.96 [-1.03, -0.89]
syslog_splunk_hec_logs ingress throughput -3.47 [-3.53, -3.42]
syslog_regex_logs2metric_ddmetrics ingress throughput -3.88 [-3.97, -3.80]
syslog_humio_logs ingress throughput -4.14 [-4.23, -4.04]
syslog_log2metric_tag_cardinality_limit_blackhole ingress throughput -4.41 [-4.53, -4.29]
syslog_loki ingress throughput -4.41 [-4.46, -4.36]
syslog_log2metric_splunk_hec_metrics ingress throughput -5.07 [-5.21, -4.93]

github-actions[bot] avatar Jun 03 '24 09:06 github-actions[bot]

Regression Detector Results

Run ID: 8bac803b-8bb3-46ab-b00a-d3ff6f4159a0 Baseline: 4a4fc2e9162ece483365959f5222fc5a38d1dad9 Comparison: 911824fb91e1f81578364eb9b2a234a4f5d2245b Total CPUs: 7

Significant changes in experiment optimization goals

Confidence level: 90.00% Effect size tolerance: |Δ mean %| ≥ 5.00%

perf experiment goal Δ mean % Δ mean % CI
otlp_http_to_blackhole ingress throughput +5.21 [+5.08, +5.34]
syslog_splunk_hec_logs ingress throughput -5.21 [-5.26, -5.16]
syslog_log2metric_tag_cardinality_limit_blackhole ingress throughput -5.75 [-5.87, -5.63]

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI
otlp_http_to_blackhole ingress throughput +5.21 [+5.08, +5.34]
otlp_grpc_to_blackhole ingress throughput +1.86 [+1.77, +1.95]
file_to_blackhole egress throughput +1.52 [-1.02, +4.06]
syslog_log2metric_humio_metrics ingress throughput +1.35 [+1.25, +1.44]
socket_to_socket_blackhole ingress throughput +1.22 [+1.15, +1.29]
http_text_to_http_json ingress throughput +0.38 [+0.26, +0.51]
syslog_regex_logs2metric_ddmetrics ingress throughput +0.30 [+0.17, +0.43]
datadog_agent_remap_blackhole ingress throughput +0.29 [+0.20, +0.37]
datadog_agent_remap_blackhole_acks ingress throughput +0.26 [+0.17, +0.35]
http_to_http_noack ingress throughput +0.15 [+0.06, +0.24]
datadog_agent_remap_datadog_logs_acks ingress throughput +0.12 [+0.04, +0.21]
http_to_s3 ingress throughput +0.08 [-0.20, +0.36]
http_to_http_json ingress throughput +0.05 [-0.03, +0.13]
splunk_hec_indexer_ack_blackhole ingress throughput +0.00 [-0.14, +0.15]
splunk_hec_to_splunk_hec_logs_acks ingress throughput -0.00 [-0.14, +0.14]
datadog_agent_remap_datadog_logs ingress throughput -0.00 [-0.11, +0.10]
splunk_hec_to_splunk_hec_logs_noack ingress throughput -0.03 [-0.15, +0.08]
enterprise_http_to_http ingress throughput -0.08 [-0.14, -0.03]
http_to_http_acks ingress throughput -0.29 [-1.65, +1.07]
splunk_hec_route_s3 ingress throughput -0.62 [-1.07, -0.17]
fluent_elasticsearch ingress throughput -0.79 [-1.27, -0.32]
http_elasticsearch ingress throughput -0.85 [-0.92, -0.78]
syslog_humio_logs ingress throughput -3.31 [-3.43, -3.20]
syslog_log2metric_splunk_hec_metrics ingress throughput -3.85 [-3.98, -3.71]
syslog_loki ingress throughput -4.87 [-4.92, -4.81]
syslog_splunk_hec_logs ingress throughput -5.21 [-5.26, -5.16]
syslog_log2metric_tag_cardinality_limit_blackhole ingress throughput -5.75 [-5.87, -5.63]

github-actions[bot] avatar Jun 03 '24 11:06 github-actions[bot]

These regression tests should really not keep failing...

StephenWakely avatar Jun 03 '24 13:06 StephenWakely

Regression Detector Results

Run ID: 0d3c08af-975e-487d-b242-57a79b63824d Baseline: 3da355b0bde93ed2afa643c53e32b84ac387fd4f Comparison: 7ce8fd5abc67e55c986879e078be179eb8ed8ec9 Total CPUs: 7

No significant changes in experiment optimization goals

Confidence level: 90.00% Effect size tolerance: |Δ mean %| ≥ 5.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI
syslog_log2metric_splunk_hec_metrics ingress throughput +3.08 [+2.93, +3.22]
syslog_splunk_hec_logs ingress throughput +1.50 [+1.45, +1.56]
syslog_humio_logs ingress throughput +1.42 [+1.29, +1.55]
datadog_agent_remap_datadog_logs_acks ingress throughput +1.36 [+1.28, +1.44]
syslog_regex_logs2metric_ddmetrics ingress throughput +1.30 [+1.21, +1.38]
syslog_log2metric_tag_cardinality_limit_blackhole ingress throughput +1.12 [+1.01, +1.23]
fluent_elasticsearch ingress throughput +0.67 [+0.18, +1.15]
syslog_loki ingress throughput +0.49 [+0.41, +0.57]
datadog_agent_remap_blackhole_acks ingress throughput +0.41 [+0.30, +0.51]
datadog_agent_remap_datadog_logs ingress throughput +0.31 [+0.20, +0.41]
http_to_http_acks ingress throughput +0.28 [-1.08, +1.65]
http_elasticsearch ingress throughput +0.25 [+0.18, +0.32]
http_to_http_noack ingress throughput +0.12 [+0.02, +0.21]
http_to_http_json ingress throughput +0.05 [-0.03, +0.13]
splunk_hec_indexer_ack_blackhole ingress throughput +0.00 [-0.15, +0.15]
splunk_hec_to_splunk_hec_logs_acks ingress throughput -0.00 [-0.14, +0.14]
splunk_hec_to_splunk_hec_logs_noack ingress throughput -0.03 [-0.14, +0.08]
enterprise_http_to_http ingress throughput -0.06 [-0.14, +0.02]
http_to_s3 ingress throughput -0.33 [-0.60, -0.05]
otlp_grpc_to_blackhole ingress throughput -0.35 [-0.44, -0.26]
syslog_log2metric_humio_metrics ingress throughput -0.89 [-0.98, -0.80]
http_text_to_http_json ingress throughput -0.94 [-1.06, -0.83]
file_to_blackhole egress throughput -1.01 [-3.46, +1.45]
splunk_hec_route_s3 ingress throughput -1.15 [-1.61, -0.69]
socket_to_socket_blackhole ingress throughput -1.30 [-1.38, -1.21]
datadog_agent_remap_blackhole ingress throughput -1.53 [-1.64, -1.42]
otlp_http_to_blackhole ingress throughput -2.11 [-2.23, -1.99]

github-actions[bot] avatar Jun 03 '24 16:06 github-actions[bot]

It's weird... my code should only affect the opentelemetry source.

caibirdme avatar Jun 04 '24 02:06 caibirdme

This is now failing with doc tests:

```text
failures:

---- target/debug/build/opentelemetry-proto-d2ae8386e81e6113/out/opentelemetry.proto.trace.v1.rs - proto::trace::v1::Span::attributes (line 123) stdout ----
error: expected one of `.`, `;`, `?`, `}`, or an operator, found `:`
 --> target/debug/build/opentelemetry-proto-d2ae8386e81e6113/out/opentelemetry.proto.trace.v1.rs:124:19
  |
3 | "/http/user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"
  |                   ^ expected one of `.`, `;`, `?`, `}`, or an operator

error: aborting due to 1 previous error
```

On this doc comment in the generated proto file:

    /// attributes is a collection of key/value pairs. Note, global attributes
    /// like server name can be set using the resource API. Examples of attributes:
    ///
    ///      "/http/user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"
    ///      "/http/server_latency": 300
    ///      "example.com/my_attribute": true
    ///      "example.com/score": 10.239
    ///
    /// The OpenTelemetry API specification further restricts the allowed value types:
    /// <https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/common/README.md#attribute>
    /// Attribute keys MUST be unique (it is not allowed to have more than one
    /// attribute with the same key).
    #[prost(message, repeated, tag = "9")]

I'm not sure I understand why.

StephenWakely avatar Jun 04 '24 10:06 StephenWakely
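A likely explanation: rustdoc treats indented lines inside a doc comment as a Rust code block and compiles them as doctests, so the indented attribute examples in the generated proto comment get parsed as Rust and fail with the error shown. One possible workaround is to strip comments from the generated code in `build.rs` via prost-build's `disable_comments`. This is only a sketch; the proto paths below are placeholders, not the repository's actual layout.

```rust
// build.rs — hypothetical sketch; proto file paths are placeholders.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut config = prost_build::Config::new();
    // Strip doc comments from all generated items ("." matches everything),
    // so rustdoc never sees the indented example lines and cannot try to
    // compile them as doctests.
    config.disable_comments(["."]);
    config.compile_protos(
        &["proto/opentelemetry/trace/v1/trace.proto"], // placeholder path
        &["proto/"],
    )?;
    Ok(())
}
```

Alternatively, doctests could be skipped for the affected crate, at the cost of losing the (otherwise useful) generated documentation comments.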

Regression Detector Results

Run ID: 7db41c8d-c248-43c7-8f89-e4c83d33f959 Baseline: 3da355b0bde93ed2afa643c53e32b84ac387fd4f Comparison: d1d122e2aa8a29c1d505ca71c9e382eb3ad06691 Total CPUs: 7

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

No significant changes in experiment optimization goals

Confidence level: 90.00% Effect size tolerance: |Δ mean %| ≥ 5.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI |
| --- | --- | --- | --- | --- |
| ➖ | syslog_splunk_hec_logs | ingress throughput | +2.32 | [+2.27, +2.37] |
| ➖ | syslog_loki | ingress throughput | +1.96 | [+1.91, +2.01] |
| ➖ | syslog_humio_logs | ingress throughput | +1.88 | [+1.76, +2.00] |
| ➖ | http_to_http_acks | ingress throughput | +1.24 | [-0.13, +2.60] |
| ➖ | http_elasticsearch | ingress throughput | +1.21 | [+1.13, +1.29] |
| ➖ | datadog_agent_remap_datadog_logs | ingress throughput | +0.85 | [+0.75, +0.96] |
| ➖ | datadog_agent_remap_datadog_logs_acks | ingress throughput | +0.79 | [+0.71, +0.87] |
| ➖ | fluent_elasticsearch | ingress throughput | +0.78 | [+0.31, +1.26] |
| ➖ | syslog_regex_logs2metric_ddmetrics | ingress throughput | +0.71 | [+0.63, +0.78] |
| ➖ | http_to_http_noack | ingress throughput | +0.15 | [+0.06, +0.24] |
| ➖ | http_to_http_json | ingress throughput | +0.01 | [-0.06, +0.08] |
| ➖ | splunk_hec_indexer_ack_blackhole | ingress throughput | -0.00 | [-0.15, +0.14] |
| ➖ | splunk_hec_to_splunk_hec_logs_acks | ingress throughput | -0.00 | [-0.14, +0.14] |
| ➖ | splunk_hec_to_splunk_hec_logs_noack | ingress throughput | -0.04 | [-0.16, +0.07] |
| ➖ | enterprise_http_to_http | ingress throughput | -0.07 | [-0.12, -0.02] |
| ➖ | syslog_log2metric_tag_cardinality_limit_blackhole | ingress throughput | -0.14 | [-0.25, -0.02] |
| ➖ | http_to_s3 | ingress throughput | -0.20 | [-0.47, +0.08] |
| ➖ | syslog_log2metric_splunk_hec_metrics | ingress throughput | -0.37 | [-0.50, -0.23] |
| ➖ | file_to_blackhole | egress throughput | -0.52 | [-2.99, +1.95] |
| ➖ | syslog_log2metric_humio_metrics | ingress throughput | -0.68 | [-0.79, -0.57] |
| ➖ | socket_to_socket_blackhole | ingress throughput | -0.71 | [-0.79, -0.64] |
| ➖ | datadog_agent_remap_blackhole | ingress throughput | -0.90 | [-1.00, -0.79] |
| ➖ | otlp_http_to_blackhole | ingress throughput | -0.95 | [-1.09, -0.82] |
| ➖ | otlp_grpc_to_blackhole | ingress throughput | -0.98 | [-1.07, -0.89] |
| ➖ | datadog_agent_remap_blackhole_acks | ingress throughput | -1.17 | [-1.26, -1.07] |
| ➖ | http_text_to_http_json | ingress throughput | -1.27 | [-1.40, -1.14] |
| ➖ | splunk_hec_route_s3 | ingress throughput | -2.20 | [-2.65, -1.74] |

github-actions[bot] avatar Jun 04 '24 15:06 github-actions[bot]