cluster-api-provider-aws
Copy a subset of e2e templates to ClusterClass
We only have the quick-start e2e test ported to ClusterClass, and we are missing templates that use ClusterClass patching.
To validate CAPA's ClusterClass support, we need to port some of the e2e tests to ClusterClass templates.
/priority important-soon
/milestone v1.x
@sedefsavas @pydctw - an interesting bit is the upgrade tests and how we're resolving to the right AMI
Relevant issue to keep an eye on: https://github.com/kubernetes-sigs/cluster-api/issues/6320
As a workaround, the templates used in the tests will need to provide all fields of a list.
/assign
Continuing discussion from 4/4/22 office hours.
There are ~20 e2e tests in the unmanaged test suites (ClusterClass is not supported for EKS clusters at the moment), and only some of the tests make sense to run with ClusterClass.
Options I've been thinking about:
- Re-use the existing tests and have them select different flavors depending on an e2e configuration variable. The existing test-grid pipeline will run as-is (UseClusterClassTest=false), while we can create a new pipeline that sets UseClusterClassTest=true and runs the ClusterClass-based tests.
Pseudo code:

```go
// Pick the ClusterClass ("-topology") flavor when the e2e config variable is set.
if e2eCtx.E2EConfig.GetVariable("UseClusterClassTest") == "true" {
	configCluster.Flavor = "multi-az-topology"
} else {
	configCluster.Flavor = "multi-az"
}
```
- Create a separate test file for ClusterClass-based e2e tests and run them with ClusterClass-based templates. For example, unmanaged_functional_test.go -> unmanaged_functional_test_cluster_class.go, similar to how the quick_start tests are set up currently (unmanaged_CAPI_quick_test.go and unmanaged_CAPI_quick_test_cluster_class.go). A rough sketch of what this could look like is below.
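As a minimal sketch of option 2, modeled loosely on how the existing quick-start ClusterClass test calls the upstream CAPI spec: the `e2eCtx` package variable (from CAPA's unmanaged suite), the exact wiring, and the `multi-az-topology` flavor name are assumptions for illustration, not the actual CAPA test code.

```go
//go:build e2e
// +build e2e

package unmanaged

import (
	"context"

	"github.com/onsi/ginkgo"
	"k8s.io/utils/pointer"
	capi_e2e "sigs.k8s.io/cluster-api/test/e2e"
)

// Hypothetical ClusterClass variant of an existing functional test. It reuses
// the upstream CAPI spec but points it at a ClusterClass ("-topology") flavor,
// i.e. templates that define the workload cluster via Cluster.spec.topology.
var _ = ginkgo.Context("[unmanaged] [functional] [ClusterClass] Multi-AZ test", func() {
	ginkgo.Describe("Cluster created from a ClusterClass", func() {
		capi_e2e.QuickStartSpec(context.TODO(), func() capi_e2e.QuickStartSpecInput {
			return capi_e2e.QuickStartSpecInput{
				E2EConfig:             e2eCtx.E2EConfig,
				ClusterctlConfigPath:  e2eCtx.Environment.ClusterctlConfigPath,
				BootstrapClusterProxy: e2eCtx.Environment.BootstrapClusterProxy,
				ArtifactFolder:        e2eCtx.Settings.ArtifactFolder,
				SkipCleanup:           e2eCtx.Settings.SkipCleanup,
				Flavor:                pointer.String("multi-az-topology"), // assumed flavor name
			}
		})
	})
})
```

The corresponding `*-topology` template flavor would then live alongside the existing template flavors in the e2e test data directory.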
We can combine tests and refactor down the road, but that will take some iterations. I plan to port tests individually first, to make sure existing functionality is verified correctly and to make debugging easier.
@sedefsavas @richardcase
@pydctw Thanks for listing out the options.
I'd opt for having a separate folder for ClusterClass tests for the following reasons:
- e2e tests are very complicated and have a high learning curve as it is. Combining ClusterClass logic into the existing tests will make things more complicated, since we might have to add customization logic specific to ClusterClass. Debugging will be easier if we have separate folders.
- With each new feature we currently try to combine multiple tests into one to reduce (or at least not increase) the number of tests. This is a good opportunity to start from scratch and bundle the tests that make sense together instead of having an exact replica of the flavors we have.
- Since we will heavily rely on ClusterClass going forward, I expect us to add ClusterClass flavors that we do not need to add to the current tests.
- Keeping those tests separate will allow us to deprecate tests on either side without affecting the other.
Thanks for the input, and it makes sense. A few follow-up questions:
- Where do you suggest creating the new folder? Should it be something like `unmanaged-clusterclass` under https://github.com/kubernetes-sigs/cluster-api-provider-aws/tree/main/test/e2e/suites?
> With each new feature we currently try to combine multiple tests into one to reduce (or at least not increase) the number of tests. This is a good opportunity to start from scratch and bundle the tests that make sense together instead of having an exact replica of the flavors we have.
I agree with this goal in the long term but will start by porting individual tests first because 1) it is already difficult to write a ClusterClass correctly, make the e2e test work, and confirm parity with the existing e2e test. As you said, e2e tests are very complicated and have a high learning curve. 2) I don't have context on all the existing e2e tests, so it will be difficult to combine them at this stage.
Having said that, my plan is that once we have a few tests implemented using ClusterClass, we will try to combine them as we see fit - refactoring and deleting code is easy to do once the foundation is set. I am a big believer in continuous refactoring.
If the purpose is to reduce code duplication, CAPI does it by having shared specs and calling them with different flavors instead of having flags to switch flavors.
My suggestion is not to combine tests in the first iteration. We can have a separate file/folder and copy-paste the test flavors that we want to replicate with ClusterClass. This will make bundling the tests easier in the next iteration, and while debugging existing tests we don't need to worry about ClusterClass logic.
I think we are saying the same thing. Option 1 proposes calling different flavors as CAPI does. CAPI can do it without flags because its test specs accept the flavor as an input variable, while our tests do not. I was trying to mimic that behavior using a config variable instead of extracting each test into a separate spec as CAPI does (https://github.com/kubernetes-sigs/cluster-api/blob/main/test/e2e/quick_start.go) and providing different flavors as an input. A rough sketch of that pattern is below.
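For reference, here is a rough sketch of that pattern, where a shared spec takes the flavor as an input so one test body can be driven with either the classic or the ClusterClass ("-topology") templates. `MultiAZSpec` and `MultiAZSpecInput` are made-up names used only to illustrate the shape of CAPI's quick_start.go approach, not existing CAPA code.

```go
package e2e

import (
	"context"

	"github.com/onsi/ginkgo"
	"sigs.k8s.io/cluster-api/test/framework"
	"sigs.k8s.io/cluster-api/test/framework/clusterctl"
)

// MultiAZSpecInput is a hypothetical input struct mirroring how CAPI specs
// accept the template flavor as a parameter instead of branching on a flag.
type MultiAZSpecInput struct {
	E2EConfig             *clusterctl.E2EConfig
	BootstrapClusterProxy framework.ClusterProxy
	ArtifactFolder        string
	SkipCleanup           bool
	Flavor                string // e.g. "multi-az" or "multi-az-topology"
}

// MultiAZSpec is the shared test body; callers decide which flavor to run.
func MultiAZSpec(ctx context.Context, inputGetter func() MultiAZSpecInput) {
	ginkgo.It("should create a working multi-AZ workload cluster", func() {
		input := inputGetter()
		// Only the flavor differs between the classic and the ClusterClass run;
		// the cluster creation and verification steps stay identical.
		configCluster := clusterctl.ConfigClusterInput{
			// Remaining fields (KubeconfigPath, ClusterName, Namespace, ...) omitted for brevity.
			Flavor: input.Flavor,
		}
		_ = configCluster // a real spec would feed this into clusterctl.ApplyClusterTemplateAndWait
	})
}
```

Each suite would then call `MultiAZSpec` once: the existing suite with `Flavor: "multi-az"` and the ClusterClass suite with `Flavor: "multi-az-topology"`.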
Based on the discussion, I will start with option 2, i.e. creating a separate file for the ClusterClass tests, and see how it goes.
As mentioned, we can always change our approach down the road after a few tests are implemented.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues and PRs according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue or PR with `/reopen`
> - Mark this issue or PR as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.