origin
WIP kubeapiserver auditloganalyzer: spot handler panics in audit log
Don't let any useragent cause too many panics in apiserver
Job Failure Risk Analysis for sha: 341a9c57e05f511516436c715207f2908ef521e4
| Job Name | Failure Risk |
|---|---|
| pull-ci-openshift-origin-master-e2e-gcp-ovn-upgrade | IncompleteTests Tests for this run (20) are below the historical average (816): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems) |
| pull-ci-openshift-origin-master-e2e-gcp-ovn-rt-upgrade | IncompleteTests Tests for this run (190) are below the historical average (837): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems) |
| pull-ci-openshift-origin-master-e2e-gcp-ovn | IncompleteTests Tests for this run (103) are below the historical average (2050): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems) |
Job Failure Risk Analysis for sha: 96386f95fc6c38ce6f1603a8f3267080afb2fc3a
| Job Name | Failure Risk |
|---|---|
| pull-ci-openshift-origin-master-e2e-aws-ovn-single-node-upgrade | Medium [sig-scheduling][Early] The openshift-console console pods [apigroup:console.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel] This test has passed 93.41% of 167 runs on release 4.18 [Architecture:amd64 FeatureSet:default Installer:ipi Network:ovn NetworkStack:ipv4 Platform:aws SecurityMode:default Topology:single Upgrade:micro] in the last week. --- [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above info This test has passed 89.82% of 167 runs on release 4.18 [Architecture:amd64 FeatureSet:default Installer:ipi Network:ovn NetworkStack:ipv4 Platform:aws SecurityMode:default Topology:single Upgrade:micro] in the last week. Open Bugs alert/KubeAPIErrorBudgetBurn should not be at or above info |
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: vrutkovs. Once this PR has been reviewed and has the lgtm label, please assign sosiouxme for approval. For more information see the Code Review Process.
The full list of commands accepted by this bot can be found here.
- Approvers can indicate their approval by writing /approve in a comment
- Approvers can cancel approval by writing /approve cancel in a comment
PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@vrutkovs: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| ci/prow/e2e-aws-ovn-ipsec-serial | 96386f95fc6c38ce6f1603a8f3267080afb2fc3a | link | false | /test e2e-aws-ovn-ipsec-serial |
| ci/prow/e2e-gcp-ovn-builds | 96386f95fc6c38ce6f1603a8f3267080afb2fc3a | link | true | /test e2e-gcp-ovn-builds |
| ci/prow/e2e-aws-ovn-kube-apiserver-rollout | 74b2b958e784a91c3bc9184266aead78aaae23e9 | link | false | /test e2e-aws-ovn-kube-apiserver-rollout |
| ci/prow/e2e-aws-ovn-single-node-upgrade | 74b2b958e784a91c3bc9184266aead78aaae23e9 | link | false | /test e2e-aws-ovn-single-node-upgrade |
| ci/prow/e2e-aws-ovn-single-node-serial | 74b2b958e784a91c3bc9184266aead78aaae23e9 | link | false | /test e2e-aws-ovn-single-node-serial |
| ci/prow/lint | 74b2b958e784a91c3bc9184266aead78aaae23e9 | link | true | /test lint |
| ci/prow/e2e-gcp-ovn | 74b2b958e784a91c3bc9184266aead78aaae23e9 | link | true | /test e2e-gcp-ovn |
| ci/prow/e2e-vsphere-ovn | 74b2b958e784a91c3bc9184266aead78aaae23e9 | link | true | /test e2e-vsphere-ovn |
| ci/prow/unit | 74b2b958e784a91c3bc9184266aead78aaae23e9 | link | true | /test unit |
| ci/prow/e2e-metal-ipi-ovn-ipv6 | 74b2b958e784a91c3bc9184266aead78aaae23e9 | link | true | /test e2e-metal-ipi-ovn-ipv6 |
| ci/prow/verify | 74b2b958e784a91c3bc9184266aead78aaae23e9 | link | true | /test verify |
| ci/prow/e2e-gcp-ovn-upgrade | 74b2b958e784a91c3bc9184266aead78aaae23e9 | link | true | /test e2e-gcp-ovn-upgrade |
| ci/prow/e2e-aws-ovn-microshift | 74b2b958e784a91c3bc9184266aead78aaae23e9 | link | true | /test e2e-aws-ovn-microshift |
| ci/prow/images | 74b2b958e784a91c3bc9184266aead78aaae23e9 | link | true | /test images |
| ci/prow/verify-deps | 74b2b958e784a91c3bc9184266aead78aaae23e9 | link | true | /test verify-deps |
| ci/prow/e2e-vsphere-ovn-upi | 74b2b958e784a91c3bc9184266aead78aaae23e9 | link | true | /test e2e-vsphere-ovn-upi |
| ci/prow/e2e-aws-ovn-edge-zones | 74b2b958e784a91c3bc9184266aead78aaae23e9 | link | true | /test e2e-aws-ovn-edge-zones |
| ci/prow/okd-scos-images | 74b2b958e784a91c3bc9184266aead78aaae23e9 | link | true | /test okd-scos-images |
| ci/prow/e2e-aws-ovn-microshift-serial | 74b2b958e784a91c3bc9184266aead78aaae23e9 | link | true | /test e2e-aws-ovn-microshift-serial |
| ci/prow/e2e-aws-ovn-serial-2of2 | 74b2b958e784a91c3bc9184266aead78aaae23e9 | link | true | /test e2e-aws-ovn-serial-2of2 |
| ci/prow/e2e-aws-ovn-serial-1of2 | 74b2b958e784a91c3bc9184266aead78aaae23e9 | link | true | /test e2e-aws-ovn-serial-1of2 |
Full PR test history. Your PR dashboard.
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten /remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closed this PR.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close