
Restore retry functionality

Open stbenjam opened this issue 6 months ago • 2 comments

The default behavior of our test suites is to retry once on failure.

When migrating to OTE, I broke retries for extension-sourced tests, including kubernetes. This appeared to have gone largely unnoticed, so we made it intentional in https://github.com/openshift/origin/pull/29867.

However, once we started looking at Component Readiness, tests that now fail outright, but previously only flaked, are showing up as dozens of regressions.

This restores the old behavior of retrying any test.
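
As a rough illustration of the behavior being restored, here is a minimal Go sketch of retry-once-on-failure. The names (`runWithRetry`, `TestResult`, `Flaked`) are hypothetical and not the actual origin/OTE code:

```go
package main

import "fmt"

// TestResult is a hypothetical, simplified record of one test's outcome.
type TestResult struct {
	Name   string
	Passed bool
	Flaked bool
}

// runWithRetry runs a test and, on failure, retries it exactly once.
// If the retry passes, the test is reported as passed but flaked.
func runWithRetry(name string, run func(string) bool) TestResult {
	if run(name) {
		return TestResult{Name: name, Passed: true}
	}
	// First attempt failed; retry exactly once.
	if run(name) {
		return TestResult{Name: name, Passed: true, Flaked: true}
	}
	return TestResult{Name: name, Passed: false}
}

func main() {
	attempts := 0
	// Simulated flaky test: fails on the first attempt, passes on the retry.
	flaky := func(string) bool { attempts++; return attempts > 1 }
	fmt.Printf("%+v\n", runWithRetry("example-test", flaky))
	// Output: {Name:example-test Passed:true Flaked:true}
}
```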

We looked at a few alternatives, including removing the concept of "flakes" entirely from Component Readiness and instead counting each attempt as a discrete result. That means a flake where one attempt passes and one fails would be counted as two results.

A benefit of that approach is that we have long wanted to do some heavy testing of new tests within a single job run, and we'd be able to view those results in the desired way. However, when we attempted to turn this on, we ended up with 1,000+ regressions; we suspect flakes are not stable because they tend to fail for external reasons (e.g. API server unavailability).
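
For the discrete-results alternative, a toy Go sketch (names like `Attempt` and `tally` are hypothetical, not an existing API) of how a fail-then-pass flake would roll up into two separate results rather than one flake:

```go
package main

import "fmt"

// Attempt is one execution of a test; hypothetical type for illustration.
type Attempt struct {
	TestName string
	Passed   bool
}

// tally counts every attempt individually, so a fail-then-pass flake
// contributes one failure and one success instead of a single "flake".
func tally(attempts []Attempt) (passes, failures int) {
	for _, a := range attempts {
		if a.Passed {
			passes++
		} else {
			failures++
		}
	}
	return passes, failures
}

func main() {
	// A flaked test: the first attempt failed, the retry passed.
	results := []Attempt{
		{TestName: "example-test", Passed: false},
		{TestName: "example-test", Passed: true},
	}
	p, f := tally(results)
	fmt.Printf("passes=%d failures=%d\n", p, f) // passes=1 failures=1
}
```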

stbenjam avatar Jun 16 '25 15:06 stbenjam

/lgtm

dgoodwin avatar Jun 17 '25 13:06 dgoodwin

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dgoodwin, stbenjam

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:
  • ~~OWNERS~~ [dgoodwin,stbenjam]

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.

openshift-ci[bot] avatar Jun 17 '25 13:06 openshift-ci[bot]

@stbenjam: This pull request explicitly references no jira issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

openshift-ci-robot avatar Jun 17 '25 13:06 openshift-ci-robot

/retest-required

Remaining retests: 0 against base HEAD 140c6726a606a876153805cbcfe4b4b8c3dedf91 and 2 for PR HEAD 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 in total

openshift-ci-robot avatar Jun 17 '25 14:06 openshift-ci-robot

/retest-required

Remaining retests: 0 against base HEAD 140c6726a606a876153805cbcfe4b4b8c3dedf91 and 2 for PR HEAD 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 in total

openshift-ci-robot avatar Jun 17 '25 16:06 openshift-ci-robot

/retest-required

dgoodwin avatar Jun 18 '25 13:06 dgoodwin

@stbenjam: This pull request references TRT-2164 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the bug to target the "4.20.0" version, but no target version was set.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

openshift-ci-robot avatar Jun 18 '25 13:06 openshift-ci-robot

/skip

stbenjam avatar Jun 18 '25 13:06 stbenjam

@stbenjam: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/prow/e2e-gcp-ovn-etcd-scaling | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test e2e-gcp-ovn-etcd-scaling |
| ci/prow/e2e-azure-ovn-etcd-scaling | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test e2e-azure-ovn-etcd-scaling |
| ci/prow/e2e-gcp-fips-serial-2of2 | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test e2e-gcp-fips-serial-2of2 |
| ci/prow/e2e-openstack-serial | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test e2e-openstack-serial |
| ci/prow/e2e-aws-disruptive | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test e2e-aws-disruptive |
| ci/prow/e2e-gcp-fips-serial-1of2 | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test e2e-gcp-fips-serial-1of2 |
| ci/prow/e2e-openstack-ovn | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test e2e-openstack-ovn |
| ci/prow/e2e-aws-ovn-single-node | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test e2e-aws-ovn-single-node |
| ci/prow/e2e-aws-ovn-serial-publicnet-1of2 | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test e2e-aws-ovn-serial-publicnet-1of2 |
| ci/prow/4.12-upgrade-from-stable-4.11-e2e-aws-ovn-upgrade-rollback | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test 4.12-upgrade-from-stable-4.11-e2e-aws-ovn-upgrade-rollback |
| ci/prow/e2e-aws-ovn-single-node-upgrade | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test e2e-aws-ovn-single-node-upgrade |
| ci/prow/okd-e2e-gcp | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test okd-e2e-gcp |
| ci/prow/e2e-gcp-ovn-rt-upgrade | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test e2e-gcp-ovn-rt-upgrade |
| ci/prow/okd-scos-e2e-aws-ovn | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test okd-scos-e2e-aws-ovn |
| ci/prow/e2e-gcp-csi | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test e2e-gcp-csi |
| ci/prow/e2e-vsphere-ovn-dualstack-primaryv6 | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test e2e-vsphere-ovn-dualstack-primaryv6 |
| ci/prow/e2e-gcp-disruptive | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test e2e-gcp-disruptive |
| ci/prow/e2e-vsphere-ovn-etcd-scaling | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test e2e-vsphere-ovn-etcd-scaling |
| ci/prow/e2e-azure-ovn-upgrade | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test e2e-azure-ovn-upgrade |
| ci/prow/e2e-aws-ovn-etcd-scaling | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test e2e-aws-ovn-etcd-scaling |
| ci/prow/e2e-aws-ovn-upgrade | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | false | /test e2e-aws-ovn-upgrade |
| ci/prow/e2e-gcp-ovn | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | true | /test e2e-gcp-ovn |
| ci/prow/e2e-gcp-ovn-upgrade | 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4 | link | true | /test e2e-gcp-ovn-upgrade |

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

openshift-ci[bot] avatar Jun 18 '25 13:06 openshift-ci[bot]

/hold

We've learned it's just 6 tests; we're thinking we'll push for fixes.

dgoodwin avatar Jun 18 '25 14:06 dgoodwin

Job Failure Risk Analysis for sha: 44c758ef82ed0ccdb15b95ac826cbdd6964d16e4

| Job Name | Failure Risk |
| --- | --- |
| pull-ci-openshift-origin-main-e2e-gcp-ovn | IncompleteTests: tests for this run (19) are below the historical average (1571); not enough tests ran to make a reasonable risk analysis. This could be due to infra, installation, or upgrade problems. |
| pull-ci-openshift-origin-main-e2e-gcp-ovn-upgrade | IncompleteTests: tests for this run (19) are below the historical average (924); not enough tests ran to make a reasonable risk analysis. This could be due to infra, installation, or upgrade problems. |

openshift-trt[bot] avatar Jun 18 '25 14:06 openshift-trt[bot]

/hold

stbenjam avatar Jun 22 '25 11:06 stbenjam

Closing since we're going to try to fix the underlying tests; we can re-open if needed.

stbenjam avatar Jun 22 '25 11:06 stbenjam