cluster-api
Support different bootstrap/control plane providers in the E2E test framework
User Story
As a provider developer, I would like to use the CAPI E2E test framework to run e2e tests against control plane and bootstrap providers that aren't kubeadm.
Detailed Description
There are various places in the CAPI E2E test framework where the bootstrap & control plane providers are hardcoded or assumed to be kubeadm. This means you can't use the test framework in E2E tests where there are different control plane or bootstrap providers (e.g. CAPA EKS). A number of examples:
- [ ] The current implementation of the `validateProviders` function on the `E2EConfig` assumes that there is only 1 control plane provider and only 1 bootstrap provider and that they are kubeadm. This stops the use of other providers like the `aws-eks` ones. See here (a relaxed check is sketched after this list).
- [ ] The `InitManagementClusterAndWatchControllerLogs` function assumes kubeadm. The `InitManagementClusterAndWatchControllerLogsInput` should have fields for Bootstrap & Control Plane providers that can be populated (i.e. from the `E2EConfig`).
- [ ] The `DiscoverAndWaitForControlPlaneInitialized` function assumes kubeadm.
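As a rough sketch of the first point (illustrative only; `ProviderConfig`, the type strings, and the package name are stand-ins, not the framework's actual types), the validation could require at least one bootstrap and one control plane provider of any name, rather than exactly one kubeadm provider of each:

```go
// Hypothetical sketch of a relaxed provider validation: require at least one
// bootstrap and one control plane provider of any name, instead of exactly
// one kubeadm provider of each kind.
package e2esketch

import "fmt"

// ProviderConfig is an illustrative stand-in for the framework's provider
// config entries; only Name and Type matter for this check.
type ProviderConfig struct {
	Name string
	Type string // e.g. "BootstrapProvider" or "ControlPlaneProvider"
}

// validateProviders accepts any set of providers as long as at least one
// bootstrap and one control plane provider are declared.
func validateProviders(providers []ProviderConfig) error {
	bootstrap, controlPlane := 0, 0
	for _, p := range providers {
		switch p.Type {
		case "BootstrapProvider":
			bootstrap++
		case "ControlPlaneProvider":
			controlPlane++
		}
	}
	if bootstrap == 0 {
		return fmt.Errorf("expected at least one bootstrap provider")
	}
	if controlPlane == 0 {
		return fmt.Errorf("expected at least one control plane provider")
	}
	return nil
}
```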
/kind feature
/area testing
/milestone v0.4.0
@richardcase happy to know you are planning to use the E2E test framework!
- I'm +1 to changing this to validate that there is at least one bootstrap/control plane provider
- This is already implemented in https://github.com/kubernetes-sigs/cluster-api/pull/3708
Additionally, the `DiscoverAndWaitForControlPlaneInitialized` function assumes kubeadm.
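A minimal sketch of how that discovery could be made provider-agnostic, assuming the helper names (`getControlPlaneObject`, `controlPlaneInitialized`), the package name, and the use of `status.initialized` as the readiness signal; the `v1beta1` import path reflects a recent API version, and none of this is the framework's actual implementation:

```go
// Illustrative sketch only: resolve the control plane generically by following
// the Cluster's controlPlaneRef instead of assuming a KubeadmControlPlane.
package e2esketch

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// getControlPlaneObject fetches whatever object the Cluster's controlPlaneRef
// points at, without assuming its kind.
func getControlPlaneObject(ctx context.Context, c client.Client, cluster *clusterv1.Cluster) (*unstructured.Unstructured, error) {
	ref := cluster.Spec.ControlPlaneRef
	if ref == nil {
		return nil, fmt.Errorf("cluster %s has no controlPlaneRef", cluster.Name)
	}
	cp := &unstructured.Unstructured{}
	cp.SetGroupVersionKind(schema.FromAPIVersionAndKind(ref.APIVersion, ref.Kind))
	key := client.ObjectKey{Namespace: ref.Namespace, Name: ref.Name}
	if err := c.Get(ctx, key, cp); err != nil {
		return nil, err
	}
	return cp, nil
}

// controlPlaneInitialized reads status.initialized from the referenced object;
// providers that report readiness differently would need their own check
// (this field name is an assumption for the sketch).
func controlPlaneInitialized(cp *unstructured.Unstructured) bool {
	initialized, found, err := unstructured.NestedBool(cp.Object, "status", "initialized")
	return err == nil && found && initialized
}
```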
/help
@fabriziopandini: This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
/help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/milestone v0.4.0
/assign
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle frozen
The "`DiscoverAndWaitForControlPlaneInitialized` function assumes kubeadm" item has been handled in https://github.com/kubernetes-sigs/cluster-api/pull/4719.
/triage accepted
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
/priority backlog
The Cluster API project currently lacks enough active contributors to adequately respond to all issues and PRs.
This issue is still an interesting idea, but no one has shown up to work on it since 2021; we can reconsider if the need arises again and there are folks interested in working on it.
/close
@fabriziopandini: Closing this issue.
In response to this:
The Cluster API project currently lacks enough active contributors to adequately respond to all issues and PRs.
This issue is still an interesting idea, but no one has shown up to work on it since 2021; we can reconsider if the need arises again and there are folks interested in working on it.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.