WIP: Add disruption check for Thanos Querier API
Discussed here https://github.com/openshift/origin/pull/28737#issuecomment-2077348811
The Thanos Querier API is the principal/wrapping API for fetching metrics. Given all the consumers that depend on it, it is worth tracking how it behaves under disruption.
It is uncertain whether the SLO is achievable with the current config (a check every second): API adjustments may be needed to improve reliability, or the check may need to be loosened.
The check goes through the Route, which could increase the likelihood of false positives.
Auth isn't set; we only check for a 401. A 5xx would indicate Route backend (Thanos Querier pods) issues. We may be able to get token-based auth working fairly easily.
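For context, a minimal sketch of the probe logic described above, assuming the standard `/api/v1/query` endpoint behind the route; the host name, timeout, TLS handling, and reporting are placeholders rather than the disruption framework's actual backend wiring:

```go
// Hypothetical, self-contained sketch of the per-second probe described above;
// the endpoint path, host, timeout, and TLS handling are assumptions, not the
// actual monitortest code in this PR.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeThanosQuerier hits the Thanos Querier route once per second and classifies
// each response: a 401 still counts as "available" (no token is attached), while
// a 5xx or a transport error suggests the Route's backend (the querier pods) is down.
func probeThanosQuerier(ctx context.Context, routeHost string) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The route serves TLS; skipping verification keeps the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := fmt.Sprintf("https://%s/api/v1/query?query=up", routeHost)

	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			resp, err := client.Get(url)
			switch {
			case err != nil:
				fmt.Println("disruption: transport error:", err)
			case resp.StatusCode == http.StatusUnauthorized:
				// Expected while auth isn't set: the querier answered, so it is up.
				resp.Body.Close()
			case resp.StatusCode >= 500:
				// A 5xx from the Route points at the Thanos Querier pods themselves.
				fmt.Println("disruption: backend error:", resp.Status)
				resp.Body.Close()
			default:
				resp.Body.Close()
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	// Placeholder host; the real check would read it from the openshift-monitoring Route.
	probeThanosQuerier(ctx, "thanos-querier-openshift-monitoring.apps.example.com")
}
```

If token-based auth turns out to be easy to wire up, the probe could attach a Bearer token and expect a 2xx instead of a 401, which would also make it possible to tell auth regressions apart from backend outages.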
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: machine424. Once this PR has been reviewed and has the lgtm label, please assign bertinatto for approval. For more information see the Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
@machine424: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| ci/prow/e2e-aws-ovn-single-node | 432ee31b00f9d8dd954cffef828c6873a25a7bd0 | link | false | /test e2e-aws-ovn-single-node |
| ci/prow/e2e-agnostic-ovn-cmd | 432ee31b00f9d8dd954cffef828c6873a25a7bd0 | link | false | /test e2e-agnostic-ovn-cmd |
| ci/prow/e2e-aws-ovn-single-node-upgrade | 432ee31b00f9d8dd954cffef828c6873a25a7bd0 | link | false | /test e2e-aws-ovn-single-node-upgrade |
| ci/prow/okd-scos-e2e-aws-ovn | 432ee31b00f9d8dd954cffef828c6873a25a7bd0 | link | false | /test okd-scos-e2e-aws-ovn |
| ci/prow/e2e-aws-ovn-microshift | 432ee31b00f9d8dd954cffef828c6873a25a7bd0 | link | true | /test e2e-aws-ovn-microshift |
| ci/prow/e2e-vsphere-ovn-upi | 432ee31b00f9d8dd954cffef828c6873a25a7bd0 | link | true | /test e2e-vsphere-ovn-upi |
| ci/prow/e2e-aws-ovn-fips | 432ee31b00f9d8dd954cffef828c6873a25a7bd0 | link | true | /test e2e-aws-ovn-fips |
| ci/prow/e2e-aws-ovn-edge-zones | 432ee31b00f9d8dd954cffef828c6873a25a7bd0 | link | true | /test e2e-aws-ovn-edge-zones |
| ci/prow/e2e-aws-ovn-microshift-serial | 432ee31b00f9d8dd954cffef828c6873a25a7bd0 | link | true | /test e2e-aws-ovn-microshift-serial |
| ci/prow/e2e-aws-ovn-serial-2of2 | 432ee31b00f9d8dd954cffef828c6873a25a7bd0 | link | true | /test e2e-aws-ovn-serial-2of2 |
Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Job Failure Risk Analysis for sha: 432ee31b00f9d8dd954cffef828c6873a25a7bd0
| Job Name | Failure Risk |
|---|---|
| pull-ci-openshift-origin-main-e2e-aws-ovn-serial-2of2 | IncompleteTests - Tests for this run (24) are below the historical average (1080); not enough tests ran to make a reasonable risk analysis (this could be due to infra, installation, or upgrade problems) |
Risk analysis has seen new tests most likely introduced by this PR. Please ensure that new tests meet guidelines for naming and stability.
New Test Risks for sha: 432ee31b00f9d8dd954cffef828c6873a25a7bd0
| Job Name | New Test Risk |
|---|---|
| pull-ci-openshift-origin-main-e2e-aws-ovn-microshift | High - "[Jira:"Monitoring"] monitor test thanos-querier-api-availability setup" is a new test that failed 1 time(s) against the current commit |
| pull-ci-openshift-origin-main-e2e-aws-ovn-microshift-serial | High - "[Jira:"Monitoring"] monitor test thanos-querier-api-availability setup" is a new test that failed 1 time(s) against the current commit |
New tests seen in this PR at sha: 432ee31b00f9d8dd954cffef828c6873a25a7bd0
- "[Jira:"Monitoring"] monitor test thanos-querier-api-availability cleanup" [Total: 11, Pass: 11, Fail: 0, Flake: 0]
- "[Jira:"Monitoring"] monitor test thanos-querier-api-availability collection" [Total: 11, Pass: 11, Fail: 0, Flake: 0]
- "[Jira:"Monitoring"] monitor test thanos-querier-api-availability interval construction" [Total: 11, Pass: 11, Fail: 0, Flake: 0]
- "[Jira:"Monitoring"] monitor test thanos-querier-api-availability setup" [Total: 11, Pass: 9, Fail: 2, Flake: 0]
- "[Jira:"Monitoring"] monitor test thanos-querier-api-availability test evaluation" [Total: 11, Pass: 11, Fail: 0, Flake: 0]
- "[Jira:"Monitoring"] monitor test thanos-querier-api-availability writing to storage" [Total: 11, Pass: 11, Fail: 0, Flake: 0]
- "[sig-network] there should be nearly zero single second disruptions for thanos-querier-api-new-connections" [Total: 9, Pass: 9, Fail: 0, Flake: 0]
- "[sig-network] there should be nearly zero single second disruptions for thanos-querier-api-reused-connections" [Total: 9, Pass: 9, Fail: 0, Flake: 0]
- "[sig-network] there should be reasonably few single second disruptions for thanos-querier-api-new-connections" [Total: 9, Pass: 9, Fail: 0, Flake: 0]
- "[sig-network] there should be reasonably few single second disruptions for thanos-querier-api-reused-connections" [Total: 9, Pass: 9, Fail: 0, Flake: 0]
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closed this PR.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.