origin
include storage and must-gather tests in expectedTestCount
This pull request intends to address https://github.com/openshift/origin/issues/27350 . It appears that when https://github.com/openshift/origin/commit/a42d174f9f93110da7b1863d58ab7a21fd056513 was introduced, the storage and must-gather test counts were omitted from the expected test count.
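A minimal sketch of the idea behind the fix: the expected test count must be the sum of every suite that actually runs, including the storage and must-gather suites. The variable names and counts below are illustrative, not the actual identifiers in openshift/origin.

```shell
# Hypothetical per-suite totals (illustrative numbers only).
kube_tests=300
openshift_tests=150
storage_tests=40      # previously omitted from the sum
must_gather_tests=2   # previously omitted from the sum

# The expected count must include every suite, or the total reported
# up front will not match the number of tests actually executed.
expected=$((kube_tests + openshift_tests + storage_tests + must_gather_tests))
echo "expectedTestCount: $expected"
```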
/hold
Confirmed the expected test count matches the number of tests run in https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/27356/pull-ci-openshift-origin-master-e2e-gcp/1559628405375242240/artifacts/e2e-gcp/openshift-e2e-test/build-log.txt
/hold cancel
/assign @mtulio
@rvanderp3 thanks for handling it!
/lgtm
/assign @spadgett
To complement this with an example from the OpenShift Provider Certification Tool (OPCT), which uses openshift-tests as a backend: when starting an execution, OPCT gets the total test count from --dry-run, then parses the openshift-tests output in the CLI.
At the moment shown in [1], the total counter is correct; then, when the execution starts, it resets to a lower value [1] and increases with the test index until the last job [2].
[1] https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/31267/rehearse-31267-periodic-ci-redhat-openshift-ecosystem-provider-certification-tool-main-provider-certification-tool-vsphere/1559968506110283776#1:build-log.txt%3A711-718
[2] https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/31267/rehearse-31267-periodic-ci-redhat-openshift-ecosystem-provider-certification-tool-main-provider-certification-tool-vsphere/1559968506110283776#1:build-log.txt%3A2935-2943
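The counting step described above can be sketched as follows. This is an assumption about how a consumer like OPCT derives the expected total, simulated here with a static list standing in for the real `openshift-tests run --dry-run` output (the real binary is not invoked).

```shell
# Simulated dry-run output: one test name per line, standing in for
# what `openshift-tests run <suite> --dry-run` would print.
dry_run_output='[sig-storage] CSI volume should mount
[sig-cli] oc adm must-gather runs successfully
[sig-network] pods should communicate across nodes'

# The expected total is simply the number of listed tests; if a suite
# is missing from the dry-run sum, this total will disagree with the
# counter observed during the actual run.
total=$(printf '%s\n' "$dry_run_output" | wc -l)
echo "expected tests: $total"
```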
/retest
/retest
Hi @spadgett @mfojtik, would you mind taking a look at this one? Thanks!
/test e2e-aws-ovn-fips
/test e2e-aws-ovn-serial
/retest-required
Hi @bparees @spadgett we are trying to resolve this for the provider certification effort. Would you mind taking a look? Thanks!
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: bparees, mtulio, rvanderp3
The full list of commands accepted by this bot can be found here.
The pull request process is described here
- ~~pkg/OWNERS~~ [bparees]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/retest-required
Remaining retests: 0 against base HEAD 860cb11902da2bb0b9f935c982b3d687363b33e2 and 2 for PR HEAD 8994033808fe578e25db8d4a04f664a71126ac13 in total
/retest-required
Remaining retests: 0 against base HEAD a86cbc2fa5f7d8d48f58f6256ed4c885cc2df27a and 1 for PR HEAD 8994033808fe578e25db8d4a04f664a71126ac13 in total
/retest-required
Remaining retests: 0 against base HEAD 2053a30e50466efe7a157298ed86b7ba3879b285 and 0 for PR HEAD 8994033808fe578e25db8d4a04f664a71126ac13 in total
/hold
Revision 8994033808fe578e25db8d4a04f664a71126ac13 was retested 3 times: holding
/hold cancel
/retest-required
/retest-required
Remaining retests: 0 against base HEAD 2053a30e50466efe7a157298ed86b7ba3879b285 and 2 for PR HEAD 8994033808fe578e25db8d4a04f664a71126ac13 in total
e2e-aws-ovn-serial appears to be permafailing :(
/retest-required
Remaining retests: 0 against base HEAD 31c1187f48c4a7762c6954b34d5cd91acbc4d461 and 1 for PR HEAD 8994033808fe578e25db8d4a04f664a71126ac13 in total
/retest-required
Remaining retests: 0 against base HEAD d2336da65e9e0ad93073887e69e2161ba993d9e8 and 0 for PR HEAD 8994033808fe578e25db8d4a04f664a71126ac13 in total
/hold
Revision 8994033808fe578e25db8d4a04f664a71126ac13 was retested 3 times: holding
/hold cancel
@bparees it looks like e2e-aws-ovn-serial is permafailing and doesn't appear to be related to this change. Would it be possible to skip it?
/retest-required
Remaining retests: 0 against base HEAD d2336da65e9e0ad93073887e69e2161ba993d9e8 and 2 for PR HEAD 8994033808fe578e25db8d4a04f664a71126ac13 in total
@rvanderp3 is there a bug reported for it? is the networking team aware? i agree we can override it for this PR (i.e. isn't impacted by this PR) but we should ensure someone is working to fix it before we override.
yeah, that makes sense. I'll follow up on that.
@bparees yes, it appears this is a known issue and a fix is awaiting merge. Apologies for that, I should have checked prior to recommending the skip.