policy-server

OpenTelemetry flaky test


The integration test covering OpenTelemetry has been flaky in some runs of the GitHub CI.

This is the error reported:

---- test_otel stdout ----
Cannot connect to Kubernetes cluster: Failed to infer configuration: failed to infer config: in-cluster: (failed to read an incluster environment variable: environment variable not found), kubeconfig: (failed to read kubeconfig from '"/home/runner/.kube/config"': No such file or directory (os error 2))
Cannot connect to Kubernetes, context aware policies will not work properly
thread 'test_otel' panicked at tests/integration_test.rs:798:5:
assertion failed: metrics["metrics"].as_array().unwrap().iter().any(|m|
        { m["name"] == "kubewarden_policy_evaluation_latency_milliseconds" })
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
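
For context, the check that fails at tests/integration_test.rs:798 is an assertion over the JSON scraped from the OTLP collector. A simplified, self-contained sketch of that check (not the exact test code; `metrics_output` is a hypothetical stand-in for however the test obtains the payload):

use serde_json::Value;

// Roughly what the failing assertion does: parse the collector output and
// look for the latency histogram by name.
fn latency_metric_present(metrics_output: &str) -> bool {
    let metrics: Value = serde_json::from_str(metrics_output).expect("invalid JSON");
    metrics["metrics"]
        .as_array()
        .map(|arr| {
            arr.iter()
                .any(|m| m["name"] == "kubewarden_policy_evaluation_latency_milliseconds")
        })
        .unwrap_or(false)
}

fn main() {
    // An output without the latency metric fails the check, which is the
    // assertion failure reported above.
    assert!(!latency_metric_present(r#"{"metrics": []}"#));
}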

flavio (Sep 20 '24 13:09)

We cannot reproduce the flaky test, so we've introduced more debugging information to better understand what's going on.

Moving this out of the 'todo' column; we will wait for the failure to happen again.

flavio (Sep 25 '24 13:09)

I can see this error when I run the tests locally:

test test_otel ... FAILED

failures:

---- test_otel stdout ----
thread 'test_otel' panicked at tests/integration_test.rs:815:5:
metrics_output_json: {
  "resourceMetrics": [
    {
      "resource": {
        "attributes": [
          {
            "key": "service.name",
            "value": {
              "stringValue": "unknown_service"
            }
          },
          {
            "key": "telemetry.sdk.name",
            "value": {
              "stringValue": "opentelemetry"
            }
          },
          {
            "key": "telemetry.sdk.version",
            "value": {
              "stringValue": "0.24.1"
            }
          },
          {
            "key": "telemetry.sdk.language",
            "value": {
              "stringValue": "rust"
            }
          }
        ]
      },
      "scopeMetrics": [
        {
          "metrics": [
            {
              "name": "kubewarden_policy_evaluations_total",
              "sum": {
                "aggregationTemporality": 2,
                "dataPoints": [
                  {
                    "asInt": "1",
                    "attributes": [
                      {
                        "key": "initialization_error",
                        "value": {
                          "stringValue": "bootstrap failure: wrong_url: Error while downloading policy 'wrong_url' from ghcr.io/kubewarden/tests/not_existing:v0.1.0: Fail to interact with OCI registry: Registry error: url https://ghcr.io/v2/kubewarden/tests/not_existing/manifests/v0.1.0, envelope: OCI API errors: [OCI API error: manifest unknown]"
                        }
                      },
                      {
                        "key": "policy_name",
                        "value": {
                          "stringValue": "wrong_url"
                        }
                      }
                    ],
                    "startTimeUnixNano": "1727700657372706354",
                    "timeUnixNano": "1727700657528442954"
                  }
                ],
                "isMonotonic": true
              }
            }
          ],
          "scope": {
            "name": "kubewarden"
          }
        }
      ]
    }
  ]
}
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    test_otel

test result: FAILED. 18 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 115.27s

error: test failed, to rerun pass `--test integration_test`
make: *** [Makefile:24: test] Error 101
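
For what it's worth, the dump above can be inspected by walking resourceMetrics -> scopeMetrics -> metrics. A small illustrative helper (not part of the test suite) makes it easy to see that only kubewarden_policy_evaluations_total was exported and the latency histogram is missing:

use serde_json::{json, Value};

// Collect the metric names from an OTLP JSON payload shaped like the dump
// above: resourceMetrics -> scopeMetrics -> metrics -> name.
fn exported_metric_names(payload: &Value) -> Vec<String> {
    payload["resourceMetrics"]
        .as_array()
        .into_iter()
        .flatten()
        .flat_map(|rm| rm["scopeMetrics"].as_array().into_iter().flatten())
        .flat_map(|sm| sm["metrics"].as_array().into_iter().flatten())
        .filter_map(|m| m["name"].as_str().map(|s| s.to_string()))
        .collect()
}

fn main() {
    let payload = json!({
        "resourceMetrics": [{
            "scopeMetrics": [{
                "metrics": [{ "name": "kubewarden_policy_evaluations_total" }]
            }]
        }]
    });
    // Prints ["kubewarden_policy_evaluations_total"]: the latency histogram the
    // assertion expects never made it into the export, hence the failure.
    println!("{:?}", exported_metric_names(&payload));
}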

jvanz (Sep 30 '24 12:09)

This is getting annoying; let's try to sort this out in time for 1.22.

flavio (Feb 14 '25 14:02)

Bumping the OTel collector testcontainer to the latest version (0.119.0 at the time of writing) produces a different set of errors:

2025-02-17T17:10:08.612342Z ERROR opentelemetry_sdk:  name="BatchSpanProcessor.Flush.ExportError" reason="ExportFailed(Status { code: Unavailable, message: \", detailed error message: Connection reset by peer (os error 104)\" })" Failed during the export process
2025-02-17T17:10:08.613382Z ERROR opentelemetry_sdk:  name="BatchSpanProcessor.Flush.ExportError" reason="ExportFailed(Status { code: Unavailable, message: \", detailed error message: tls handshake eof\" })" Failed during the export process
2025-02-17T17:10:12.114413Z ERROR opentelemetry_sdk:  name="BatchSpanProcessor.Flush.ExportError" reason="ExportFailed(Status { code: Unavailable, message: \", detailed error message: tls handshake eof\" })" Failed during the export process
test test_otel has been running for over 60 seconds
2025-02-17T17:10:52.115248Z ERROR opentelemetry_sdk:  name="PeriodicReader.ExportFailed" Failed to export metrics reason="Metrics exporter otlp failed with the grpc server returns error (The service is currently unavailable): , detailed error message: tls handshake eof"
2025-02-17T17:11:52.115397Z ERROR opentelemetry_sdk:  name="PeriodicReader.ExportFailed" Failed to export metrics reason="Metrics exporter otlp failed with the grpc server returns error (The service is currently unavailable): , detailed error message: tls handshake eof"
test test_otel ... FAILED

failures:

---- test_otel stdout ----
thread 'test_otel' panicked at tests/integration_test.rs:938:14:
called `Result::unwrap()` on an `Err` value: Error("EOF while parsing a value", line: 1, column: 0)
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    test_otel

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 21 filtered out; finished in 137.64s

error: test failed, to rerun pass `--test integration_test`

We need to bump to this version before proceeding with the bug fix.
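
For reference, the bump itself is just a matter of pointing the collector testcontainer at the new tag. A rough sketch with the testcontainers crate (the real helper in the test suite is likely structured differently, and may use the contrib image instead):

use testcontainers::{runners::AsyncRunner, GenericImage};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Pin the OTel collector image to the version under test (0.119.0 here).
    let collector = GenericImage::new("otel/opentelemetry-collector", "0.119.0")
        .start()
        .await?;
    println!("collector container id: {}", collector.id());
    Ok(())
}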

fabriziosestito (Feb 17 '25 17:02)

Just sharing something that I've already said to some members of the team during one of our calls... I believe there is a concurrency issue in our tests. I found a way to reproduce the issue by running two tests with a single thread:

cargo test --workspace -- test_otel test_detect_certificate_rotation --nocapture --test-threads 1

If the number of threads is changed from 1 to 2, the tests pass. If we ensure that the test_otel test runs first, the tests also pass. When the debug log level is enabled, it's possible to see that the metric reader cannot find any metrics to export:

2025-02-24T21:23:24.410887Z DEBUG opentelemetry_sdk:  name="PeriodicReader.ExportTriggered" Export message received.
2025-02-24T21:23:24.410942Z DEBUG opentelemetry_sdk:  name="MeterProviderInvokingObservableCallbacks" count=0
2025-02-24T21:23:24.410969Z DEBUG opentelemetry_sdk:  name="PeriodicReaderWorker.NoMetricsToExport"

The traces always work. I'm wondering if the issue is caused by the meter provider being global: depending on the order of execution, it will or will not be properly configured.
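
A minimal sketch of the suspected mechanism, assuming the opentelemetry 0.24 API shown in the resource attributes above (illustrative only, not the policy-server code): the provider installed with global::set_meter_provider is process-wide, so the order in which tests touch it changes what gets exported.

use opentelemetry::{global, KeyValue};
use opentelemetry_sdk::metrics::SdkMeterProvider;

fn record_policy_evaluation() {
    // The meter comes from whatever provider is globally installed at this
    // point; before set_meter_provider() runs, that is the default no-op
    // provider and the measurement is silently dropped.
    let meter = global::meter("kubewarden");
    let counter = meter.u64_counter("kubewarden_policy_evaluations_total").init();
    counter.add(1, &[KeyValue::new("policy_name", "example")]);
}

fn main() {
    // Recorded before the SDK provider is installed: never exported.
    record_policy_evaluation();

    // A test (or the policy-server bootstrap) installs the global provider.
    // Readers/exporters are omitted here for brevity.
    let provider = SdkMeterProvider::builder().build();
    global::set_meter_provider(provider);

    // Recorded after the install: this would be picked up by the provider's
    // reader/exporter pipeline (omitted above).
    record_policy_evaluation();

    // `cargo test` runs all integration tests in one process, so this global
    // state is shared and the outcome depends on which test touches it first.
}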

jvanz (Feb 24 '25 21:02)

@jvanz

I'm wondering if the issue is caused by the meter provider being global: depending on the order of execution, it will or will not be properly configured.

This might be it, good catch!

Since OpenTelemetry relies on global state to configure metrics and tracing, a simple fix could be isolating the test behind a Cargo feature.

Cargo.toml:

[features]
otel_tests = []

Integration test:

#[test]
#[cfg(feature = "otel_tests")]
fn test_otel() {
    // ...
}

cargo test --features otel_tests
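
Because the cfg attribute removes the test from compilation when the feature is off, the plain cargo test run keeps skipping it. A possible CI setup would then run the two suites separately (hypothetical invocations, not the current Makefile), with the OTEL test filtered out and forced onto a single thread so it cannot race with the other tests for the global telemetry state:

cargo test --workspace
cargo test --workspace --features otel_tests -- test_otel --test-threads=1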

fabriziosestito (Feb 25 '25 07:02)

Yes, I agree that would be a fix. But I think we should improve this a little further some day: because the traces and metrics are global, if one enables the debug log level, all the tests that run afterwards will pollute the output or cause other issues. That's what I was trying to address for a while, but maybe it is a premature optimization for now.

jvanz (Feb 25 '25 11:02)