origin
add a wait between each serial test to ensure the cluster reaches a stable state
This makes serial tests wait after each test until all ClusterOperators report Progressing=false. This should help in cases where a test leaves the cluster dirty. We still prefer that the tests themselves properly wait until the cluster is stable, rather than relying on this backstop.
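For illustration, here is a minimal sketch of such a stability check in Go, assuming the openshift/client-go config clientset. The function name, polling interval, and timeout are illustrative assumptions, not the exact code in this PR.

```go
// A minimal sketch of the "wait for stable cluster" backstop described
// above: poll until every ClusterOperator reports Progressing=False.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	configv1 "github.com/openshift/api/config/v1"
	configclient "github.com/openshift/client-go/config/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForStableCluster polls until every ClusterOperator reports
// Progressing=False, or the timeout expires. Interval and timeout
// values here are assumptions for the sketch.
func waitForStableCluster(ctx context.Context, cfg *rest.Config, timeout time.Duration) error {
	client, err := configclient.NewForConfig(cfg)
	if err != nil {
		return err
	}
	return wait.PollImmediate(10*time.Second, timeout, func() (bool, error) {
		operators, err := client.ConfigV1().ClusterOperators().List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		for _, co := range operators.Items {
			for _, cond := range co.Status.Conditions {
				// Any operator still Progressing means the cluster has
				// not yet settled after the previous test.
				if cond.Type == configv1.OperatorProgressing && cond.Status != configv1.ConditionFalse {
					fmt.Printf("clusteroperator/%s is still progressing\n", co.Name)
					return false, nil
				}
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := waitForStableCluster(context.Background(), cfg, 10*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, "cluster did not stabilize:", err)
		os.Exit(1)
	}
}
```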
/hold
/assign @neisw @stbenjam
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: deads2k
The full list of commands accepted by this bot can be found here.
The pull request process is described here
- ~~pkg/OWNERS~~ [deads2k]
- ~~test/extended/OWNERS~~ [deads2k]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/test e2e-aws-serial
First one timed out at 4h+, not a good sign.
Think I found it. Trying it again.
@deads2k: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| ci/prow/e2e-gcp-ovn-rt-upgrade | 2ea89b2329535048425400ed31205b52924808a1 | link | false | /test e2e-gcp-ovn-rt-upgrade |
| ci/prow/e2e-aws-cgroupsv2 | 2ea89b2329535048425400ed31205b52924808a1 | link | false | /test e2e-aws-cgroupsv2 |
| ci/prow/e2e-metal-ipi-ovn-ipv6 | 2ea89b2329535048425400ed31205b52924808a1 | link | true | /test e2e-metal-ipi-ovn-ipv6 |
| ci/prow/e2e-aws-single-node-upgrade | 2ea89b2329535048425400ed31205b52924808a1 | link | false | /test e2e-aws-single-node-upgrade |
Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@deads2k You mentioned during stand-up that you weren't seeing your debug output, but it appears to be in the build-log: https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/27196/pull-ci-openshift-origin-master-e2e-aws-serial/1532109061363863552/artifacts/e2e-aws-serial/openshift-e2e-test/build-log.txt
Is that what you were looking for?
@deads2k: PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@deads2k: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| ci/prow/e2e-gcp-ovn-rt-upgrade | 2ea89b2329535048425400ed31205b52924808a1 | link | false | /test e2e-gcp-ovn-rt-upgrade |
| ci/prow/e2e-aws-cgroupsv2 | 2ea89b2329535048425400ed31205b52924808a1 | link | false | /test e2e-aws-cgroupsv2 |
| ci/prow/e2e-metal-ipi-ovn-ipv6 | 2ea89b2329535048425400ed31205b52924808a1 | link | true | /test e2e-metal-ipi-ovn-ipv6 |
| ci/prow/e2e-aws-single-node-upgrade | 2ea89b2329535048425400ed31205b52924808a1 | link | false | /test e2e-aws-single-node-upgrade |
| ci/prow/unit | 2ea89b2329535048425400ed31205b52924808a1 | link | true | /test unit |
| ci/prow/e2e-gcp-ovn-builds | 2ea89b2329535048425400ed31205b52924808a1 | link | true | /test e2e-gcp-ovn-builds |
| ci/prow/e2e-aws-ovn-image-registry | 2ea89b2329535048425400ed31205b52924808a1 | link | true | /test e2e-aws-ovn-image-registry |
| ci/prow/e2e-gcp-ovn-image-ecosystem | 2ea89b2329535048425400ed31205b52924808a1 | link | true | /test e2e-gcp-ovn-image-ecosystem |
Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closed this PR.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.