Do not share config across tests
What happened?
Currently, the config object is passed by reference across all the StepFuncs of a feature. When running tests, this creates a number of issues: any test that updates the config also changes it for every other test, leading to undesirable side effects.
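For illustration, here is a minimal, self-contained Go sketch of the underlying mechanism. The `config` type and `withNamespace` helper are hypothetical stand-ins, not the real `envconf.Config` API; the point is simply that every holder of the same pointer observes every mutation:

```go
package main

import "fmt"

// config is a hypothetical stand-in for envconf.Config.
type config struct {
	namespace string
}

// withNamespace mutates the receiver in place, mirroring how a
// setter on a shared *Config would behave.
func (c *config) withNamespace(ns string) *config {
	c.namespace = ns
	return c
}

func main() {
	shared := &config{namespace: "default"}

	// Both "tests" hold the same pointer.
	cfgForTest1 := shared
	cfgForTest2 := shared

	cfgForTest1.withNamespace("ns-test1")

	// Test2 sees Test1's change because the config is shared by reference.
	fmt.Println(cfgForTest2.namespace) // prints "ns-test1"
}
```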
What did you expect to happen?
I currently see 2 use cases where this is an issue:
- creating a separate namespace for each test with `-parallel`. When using the `envfuncs.CreateNamespace` func, the namespace is stored in the config. However, when the tests run in parallel, `cfg.namespace` gets overwritten by each test. There is an example going around the issue that uses the context instead (see the sketch after this list), but I feel like it is hacky compared to using the cfg object.
- changing the config for a specific test. For instance, if I want to enable `FailFast` on a specific test (because I know that when it fails there is no point continuing it), I can't enable it for that test only: once I call `cfg.WithFailFast()`, it applies to all the remaining tests that run after it.
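For context, here is a minimal sketch of the context-based workaround mentioned in the first point. The `nsKey`, `beforeEachTest`, and `assessWithContextNamespace` names are assumptions for illustration, not part of the framework: the namespace is carried in the returned context, so each parallel test reads its own value instead of the shared `cfg`.

```go
package main

import (
	"context"
	"testing"

	"sigs.k8s.io/e2e-framework/pkg/envconf"
	"sigs.k8s.io/e2e-framework/pkg/envfuncs"
)

// nsKey is a hypothetical context key used to carry the per-test namespace.
type nsKey struct{}

// beforeEachTest creates a namespace and stores its name in the returned
// context instead of relying on cfg.Namespace(), so parallel tests do not
// overwrite each other's value.
func beforeEachTest(ctx context.Context, cfg *envconf.Config, t *testing.T) (context.Context, error) {
	t.Parallel()
	namespace := envconf.RandomName("ns", 16)
	ctx, err := envfuncs.CreateNamespace(namespace)(ctx, cfg)
	if err != nil {
		return ctx, err
	}
	return context.WithValue(ctx, nsKey{}, namespace), nil
}

// assessWithContextNamespace reads the namespace back from the context
// rather than from the shared config.
func assessWithContextNamespace(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
	namespace, _ := ctx.Value(nsKey{}).(string)
	t.Log("Namespace: ", namespace)
	return ctx
}
```

The first function would be registered with `testenv.BeforeEachTest(...)` in `TestMain`, and the second passed to `features.New(...).Assess(...)`, as in the repro below.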
How can we reproduce it (as minimally and precisely as possible)?
This code is enough to show the issue:
package main

import (
	"context"
	"os"
	"testing"
	"time"

	"sigs.k8s.io/e2e-framework/pkg/env"
	"sigs.k8s.io/e2e-framework/pkg/envconf"
	"sigs.k8s.io/e2e-framework/pkg/envfuncs"
	"sigs.k8s.io/e2e-framework/pkg/features"
	"sigs.k8s.io/e2e-framework/support/kind"
)

var testenv env.Environment

func TestMain(m *testing.M) {
	clusterName := envconf.RandomName("cluster", 16)
	testenv = env.New()
	testenv.
		Setup(envfuncs.CreateCluster(kind.NewProvider(), clusterName)).
		BeforeEachTest(
			func(ctx context.Context, cfg *envconf.Config, t *testing.T) (context.Context, error) {
				t.Parallel()
				namespace := envconf.RandomName("ns", 16)
				return envfuncs.CreateNamespace(namespace)(ctx, cfg)
			},
		).
		Finish(envfuncs.DestroyCluster(clusterName))
	os.Exit(testenv.Run(m))
}

func Test1(t *testing.T) {
	feat := features.New("Test1").Assess("Test1", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
		time.Sleep(1 * time.Second) // sleep to let Test2 read the namespace first
		t.Log("Namespace 1: ", cfg.Namespace())
		return ctx
	}).Feature()
	testenv.Test(t, feat)
}

func Test2(t *testing.T) {
	feat := features.New("Test1").Assess("Test1", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
		t.Log("Namespace 2: ", cfg.Namespace())
		return ctx
	}).Feature()
	testenv.Test(t, feat)
}
Result:
=== RUN Test1
=== PAUSE Test1
=== RUN Test2
=== PAUSE Test2
=== CONT Test1
=== CONT Test2
=== RUN Test2/Test1
=== RUN Test1/Test1
=== RUN Test2/Test1/Test1
=== RUN Test1/Test1/Test1
=== NAME Test2/Test1/Test1
main_test.go:64: Namespace 2: ns-6ef56304fde1a
--- PASS: Test2 (0.02s)
--- PASS: Test2/Test1 (0.00s)
--- PASS: Test2/Test1/Test1 (0.00s)
=== NAME Test1/Test1/Test1
main_test.go:55: Namespace 1: ns-6ef56304fde1a
--- PASS: Test1 (1.02s)
--- PASS: Test1/Test1 (1.00s)
--- PASS: Test1/Test1/Test1 (1.00s)
PASS
Anything else we need to know?
I think we could make copies of the config at different steps of the tests (before each test, before each feature, before each assess) and only share the copy with the child element (test config -> feature config -> assess config). That way, each step could override the cfg without any risk of impacting its parent or its cousins (what happens in one assess would not impact the other assesses or the feature). However, modifying the feature config would still impact all the assesses in that feature. What do you think about this? A rough sketch of the idea is shown below.
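To make the proposal concrete, here is a minimal, hypothetical sketch of the copy-per-level idea. It does not use the real `envconf.Config` API; `testConfig`, `deriveConfig`, and the field names are assumptions purely for illustration:

```go
package main

import "fmt"

// testConfig is a hypothetical, simplified stand-in for envconf.Config.
type testConfig struct {
	namespace string
	failFast  bool
}

// deriveConfig returns a copy of the parent config. Each level
// (test -> feature -> assess) would receive its own copy, so mutations
// only flow downward, never back up or sideways.
func deriveConfig(parent *testConfig) *testConfig {
	child := *parent // copy the struct value
	return &child
}

func main() {
	testCfg := &testConfig{namespace: "ns-test"}

	// Each feature works on its own copy of the test config.
	featureCfg := deriveConfig(testCfg)
	featureCfg.failFast = true // only affects this feature and its assesses

	// Each assess works on its own copy of the feature config.
	assessCfg := deriveConfig(featureCfg)
	assessCfg.namespace = "ns-assess" // only affects this assess

	fmt.Println(testCfg.failFast, testCfg.namespace)       // false ns-test
	fmt.Println(featureCfg.failFast, featureCfg.namespace) // true ns-test
	fmt.Println(assessCfg.failFast, assessCfg.namespace)   // true ns-assess
}
```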
E2E Provider Used
real cluster
e2e-framework Version
v0.3.0
Hi @Fricounet, apologies for the late reply. Originally the API was designed for one config per test. As it evolved, it became apparent that folks would want different configurations for different tests.
So a fix may require a substantial redesign, or the introduction of new methods to better share the config or create a new config per test where necessary. Is that something you think you can help out with?
Referencing https://github.com/kubernetes-sigs/e2e-framework/pull/367 for reference.
Hey @vladimirvivien, I actually just found the time to work on this. I opened #396 to solve it :smile:
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale