[release-4.19] OCPBUGS-59868: spyglass: hide disruption events for localhost
This is an automated cherry-pick of #29710
/assign wangke19
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: openshift-cherrypick-robot Once this PR has been reviewed and has the lgtm label, please assign xueqzhan for approval. For more information see the Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
@openshift-cherrypick-robot: Jira Issue OCPBUGS-55238 has been cloned as Jira Issue OCPBUGS-59868. Will retitle bug to link to clone. /retitle [release-4.19] OCPBUGS-59868: spyglass: hide disruption events for localhost
In response to this:
This is an automated cherry-pick of #29710
/assign wangke19
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
@openshift-cherrypick-robot: This pull request references Jira Issue OCPBUGS-59868, which is invalid:
- release note text must be set and not match the template OR release note type must be set to "Release Note Not Required". For more information you can reference the OpenShift Bug Process.
- expected dependent Jira Issue OCPBUGS-55238 to target a version in 4.20.0, but it targets "4.19.z" instead
Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.
The bug has been updated to refer to the pull request using the external bug tracker.
@openshift-cherrypick-robot: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| ci/prow/e2e-aws-ovn-single-node-upgrade | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-aws-ovn-single-node-upgrade |
| ci/prow/e2e-aws-ovn-serial-2of2 | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | true | /test e2e-aws-ovn-serial-2of2 |
| ci/prow/e2e-vsphere-ovn-dualstack-primaryv6 | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-vsphere-ovn-dualstack-primaryv6 |
| ci/prow/e2e-metal-ipi-virtualmedia | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-metal-ipi-virtualmedia |
| ci/prow/e2e-aws-ovn | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-aws-ovn |
| ci/prow/e2e-aws-disruptive | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-aws-disruptive |
| ci/prow/e2e-openstack-serial | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-openstack-serial |
| ci/prow/e2e-aws-ovn-cgroupsv2 | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-aws-ovn-cgroupsv2 |
| ci/prow/e2e-aws | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-aws |
| ci/prow/e2e-metal-ipi-ovn-kube-apiserver-rollout | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-metal-ipi-ovn-kube-apiserver-rollout |
| ci/prow/e2e-metal-ipi-serial | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-metal-ipi-serial |
| ci/prow/e2e-azure | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-azure |
| ci/prow/e2e-aws-ovn-serial-1of2 | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | true | /test e2e-aws-ovn-serial-1of2 |
| ci/prow/e2e-aws-ovn-kube-apiserver-rollout | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-aws-ovn-kube-apiserver-rollout |
| ci/prow/e2e-aws-ovn-single-node-serial | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-aws-ovn-single-node-serial |
| ci/prow/e2e-gcp-fips-serial | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-gcp-fips-serial |
| ci/prow/e2e-aws-ovn-upgrade | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-aws-ovn-upgrade |
| ci/prow/e2e-gcp-ovn-etcd-scaling | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-gcp-ovn-etcd-scaling |
| ci/prow/e2e-aws-ovn-etcd-scaling | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-aws-ovn-etcd-scaling |
| ci/prow/e2e-vsphere-ovn | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | true | /test e2e-vsphere-ovn |
| ci/prow/e2e-aws-ovn-edge-zones | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | true | /test e2e-aws-ovn-edge-zones |
| ci/prow/e2e-vsphere-ovn-etcd-scaling | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-vsphere-ovn-etcd-scaling |
| ci/prow/e2e-azure-ovn-etcd-scaling | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-azure-ovn-etcd-scaling |
| ci/prow/e2e-azure-ovn-upgrade | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-azure-ovn-upgrade |
| ci/prow/e2e-gcp-disruptive | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-gcp-disruptive |
| ci/prow/e2e-metal-ipi-ovn-dualstack | 6793c9f96d1e564e7a66004ffc5262b0e5781b23 | link | false | /test e2e-metal-ipi-ovn-dualstack |
Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now, please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now, please do so with /close.
/lifecycle rotten
/remove-lifecycle stale