cluster-api-provider-aws
Divide presubmit e2e tests to shorten test duration
Our e2e test suite is growing, and as a consequence it takes more time (~1.5 hours) to finish.
Not all of the tests give valuable signal as presubmit jobs; running them in periodic jobs is enough. We could divide the existing tests into pr-e2e-full and pr-e2e-essential (or better naming) to run on PRs. For instance, BYO infra tests and CSI migration tests are not affected by most changes, so they could go into pr-e2e-full only. Similarly, clusterctl upgrade tests are only important when a PR changes the API.
Also proposing to enable the blocking e2e tests to run for all PRs.
/priority backlog
/kind testing
/milestone v1.x
/triage accepted
@sedefsavas: The label(s) kind/testing cannot be applied, because the repository doesn't have them.
In response to this:
/priority backlog
/kind testing
/milestone v1.x
/triage accepted
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@sedefsavas Is this as simple as adding a 2nd job here https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes-sigs/cluster-api-provider-aws/cluster-api-provider-aws-presubmits.yaml that uses something more focused than scripts/ci-e2e.sh?
Is there a list of what we'd consider essential?
> @sedefsavas Is this as simple as adding a 2nd
Yes, we can use Ginkgo Focus for selecting the tests. We can add a certain label to test names in this repo, then use that label for the essential tests.
There is no specific definition I have in mind. For example, we have 3 different cluster upgrade tests that take too much time; we could run only 1 of them and still get a good signal, especially for small PRs. The idea is to cover different logic with as few tests as possible, so that the essential tests alone give enough signal.
Looking at the test grid, the following are good candidates for essentials:
- Cluster Upgrade Spec - HA Control Plane Cluster [K8s-Upgrade] Should create and upgrade a workload cluster and run kubetest
- Machine Pool Spec Should successfully create a cluster with machine pool machines
- Machine Remediation Spec Should successfully trigger KCP remediation
- Machine Remediation Spec Should successfully trigger machine deployment remediation
- Multitenancy test should create cluster with nested assumed role
Feel free to add/remove as you see fit.
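To make the labeling idea concrete, here is a minimal sketch of how candidates like the ones above could be tagged and then selected with Ginkgo's focus regexp. The [PR-Essential] tag, spec names, and package layout are illustrative assumptions, not the project's actual convention:

```go
// Minimal sketch: a hypothetical "[PR-Essential]" tag embedded in spec names.
// ginkgo/v2 is assumed here purely for illustration.
package e2e_test

import (
	. "github.com/onsi/ginkgo/v2"
)

var _ = Describe("Machine Remediation Spec", func() {
	// Tagged spec: matched by a focused "essential" presubmit job.
	It("[PR-Essential] Should successfully trigger KCP remediation", func() {
		// ... existing test body ...
	})

	// Untagged spec: runs only in the full presubmit/periodic jobs.
	It("Should successfully trigger machine deployment remediation", func() {
		// ... existing test body ...
	})
})
```

The essential presubmit job could then run the suite with a focus expression along the lines of `ginkgo --focus='\[PR-Essential\]' ./test/e2e/...` (or an equivalent environment variable, if scripts/ci-e2e.sh exposes one), while the full job keeps running everything.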
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.