Skipping when fail-fast is set results in failure
What happened?
See reproducer here.
Running the test defined in the reproducer results in the whole suite failing on the latest releases, both 0.4.0 and 0.5.0, as well as on the main branch:
```
go test -v -test.count 1 ./...
=== RUN   TestSkip
=== RUN   TestSkip/skip
=== RUN   TestSkip/skip/Assess_1
    main_test.go:23: Assess 1 (should be printed)
    main_test.go:24: skipping Assess 1
--- FAIL: TestSkip (0.00s)
    --- FAIL: TestSkip/skip (0.00s)
        --- SKIP: TestSkip/skip/Assess_1 (0.00s)
FAIL
FAIL    github.com/phisco/e2e-framework-test-skip  0.497s
FAIL
```
What did you expect to happen?
I expected only the assessment Assess_1 to be skipped, with the suite still succeeding as it did in 0.3.0, see PR .
```
go test -v -test.count 1 ./...
=== RUN   TestSkip
=== RUN   TestSkip/skip
=== RUN   TestSkip/skip/Assess_1
    main_test.go:23: Assess 1 (should be printed)
    main_test.go:24: skipping Assess 1
=== RUN   TestSkip/skip/Assess_2
    main_test.go:28: Assess 2 (should be printed)
=== RUN   TestSkip/succeed
=== RUN   TestSkip/succeed/Assess_1
    main_test.go:35: Assess 1 (should be printed)
=== RUN   TestSkip/succeed/Assess_2
    main_test.go:39: Assess 2 (should be printed)
--- PASS: TestSkip (0.00s)
    --- PASS: TestSkip/skip (0.00s)
        --- SKIP: TestSkip/skip/Assess_1 (0.00s)
        --- PASS: TestSkip/skip/Assess_2 (0.00s)
    --- PASS: TestSkip/succeed (0.00s)
        --- PASS: TestSkip/succeed/Assess_1 (0.00s)
        --- PASS: TestSkip/succeed/Assess_2 (0.00s)
PASS
ok      github.com/phisco/e2e-framework-test-skip  0.467s
```
How can we reproduce it (as minimally and precisely as possible)?
See reproducer here.
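For reference, here is a minimal sketch of a test that matches the output above, assuming the standard e2e-framework features API; it is not the linked reproducer, and the WithFailFast option is my assumption (the actual reproducer may enable fail-fast differently, for example via a flag):

```go
// main_test.go: hypothetical minimal reproducer, inferred from the test
// output above. Not the code behind the link in this report.
package main

import (
	"context"
	"os"
	"testing"

	"sigs.k8s.io/e2e-framework/pkg/env"
	"sigs.k8s.io/e2e-framework/pkg/envconf"
	"sigs.k8s.io/e2e-framework/pkg/features"
)

var testenv env.Environment

func TestMain(m *testing.M) {
	// Assumption: fail-fast is enabled on the environment config here.
	testenv = env.NewWithConfig(envconf.New().WithFailFast())
	os.Exit(testenv.Run(m))
}

func TestSkip(t *testing.T) {
	skip := features.New("skip").
		Assess("Assess 1", func(ctx context.Context, t *testing.T, _ *envconf.Config) context.Context {
			t.Log("Assess 1 (should be printed)")
			t.Skip("skipping Assess 1") // only this assessment should be skipped
			return ctx
		}).
		Assess("Assess 2", func(ctx context.Context, t *testing.T, _ *envconf.Config) context.Context {
			t.Log("Assess 2 (should be printed)")
			return ctx
		}).
		Feature()

	succeed := features.New("succeed").
		Assess("Assess 1", func(ctx context.Context, t *testing.T, _ *envconf.Config) context.Context {
			t.Log("Assess 1 (should be printed)")
			return ctx
		}).
		Assess("Assess 2", func(ctx context.Context, t *testing.T, _ *envconf.Config) context.Context {
			t.Log("Assess 2 (should be printed)")
			return ctx
		}).
		Feature()

	testenv.Test(t, skip, succeed)
}
```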
Anything else we need to know?
I think the issue was introduced by https://github.com/kubernetes-sigs/e2e-framework/pull/391, as a t.Skip leaves shouldFailNow set to true exactly as a t.Fail does, here: https://github.com/kubernetes-sigs/e2e-framework/blob/72eb7e1db8c42b56b856843186f906c661d5da7a/pkg/env/env.go#L506-L530
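To illustrate the distinction, here is a standalone sketch using plain testing (not the framework's code; shouldFailNow below is only a local stand-in for the variable referenced above) of a fail-fast guard that checks t.Skipped() alongside t.Failed(), so that a skipped assessment does not abort the remaining ones:

```go
// Standalone sketch, not e2e-framework code: a fail-fast guard that only
// trips on a real failure, not on a skip.
package main

import "testing"

func TestFailFastGuard(t *testing.T) {
	var shouldFailNow bool

	t.Run("Assess_1", func(t *testing.T) {
		defer func() {
			// t.Skip and t.Fatal both end the assessment early, but only a
			// failure should abort the remaining assessments under fail-fast,
			// so check Skipped() in addition to Failed().
			if t.Failed() && !t.Skipped() {
				shouldFailNow = true
			}
		}()
		t.Log("Assess 1 (should be printed)")
		t.Skip("skipping Assess 1")
	})

	if shouldFailNow {
		t.Fatal("fail-fast: aborting remaining assessments")
	}

	t.Run("Assess_2", func(t *testing.T) {
		t.Log("Assess 2 (should be printed)")
	})
}
```

If the guard treats any early exit as a failure, a t.Skip becomes indistinguishable from a t.Fail, which matches the behaviour reported above.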
E2E Provider Used
kind
e2e-framework Version
0.5.0
OS version
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
Cc @Fricounet as original author
Thanks for reporting this regression, @phisco. cc @harshanarayana
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten