cluster-api-provider-aws
Requeue after EKS cluster creation request is sent / cluster transitioned to UPDATING
/kind refactor
Describe the solution you'd like
Currently, CAPA does not requeue for EKS cluster creation / update; instead, it continues within the same reconciliation:
- Creation: it blocks, waiting for the cluster to become active (https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/main/pkg/cloud/services/eks/cluster.go#L58-L79)
- Update: it continues reconciling, for example during a cluster version update (https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/4274a5ab5c9dc391045840d6e45e17fe0cbab3dc/pkg/cloud/services/eks/cluster.go#L531)
CAPA should instead requeue (probably after a duration): for creation, once the create request has been sent; for update, once the cluster has transitioned to UPDATING.
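As a rough illustration of the requested requeue pattern on the creation path (a sketch only, not CAPA's actual code: the helper name, client wiring, and the 30-second interval are all assumptions), the reconciler would check the cluster status and return a result with RequeueAfter set instead of blocking on the SDK's waiters:

```go
// Sketch only: illustrates requeueing during EKS cluster creation instead
// of blocking with the SDK's WaitUntilClusterActive; not CAPA's actual code.
package eks

import (
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/eks"
	"github.com/aws/aws-sdk-go/service/eks/eksiface"
	ctrl "sigs.k8s.io/controller-runtime"
)

// reconcileClusterStatus is hypothetical; the 30s interval is an assumption.
func reconcileClusterStatus(client eksiface.EKSAPI, name string) (ctrl.Result, error) {
	out, err := client.DescribeCluster(&eks.DescribeClusterInput{Name: aws.String(name)})
	if err != nil {
		return ctrl.Result{}, err
	}

	switch aws.StringValue(out.Cluster.Status) {
	case eks.ClusterStatusCreating, eks.ClusterStatusUpdating:
		// Still transitioning: hand control back to the manager and ask to
		// be reconciled again later, so other clusters aren't blocked.
		return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
	case eks.ClusterStatusActive:
		// Safe to continue reconciling dependent resources.
		return ctrl.Result{}, nil
	default:
		// FAILED, DELETING, etc. are left to the surrounding error handling.
		return ctrl.Result{}, nil
	}
}
```

Returning RequeueAfter frees the worker to reconcile other clusters while this one converges, rather than holding it in a blocking wait.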
Environment:
- Cluster-api-provider-aws version: v1.0.0
@richardchen-db: This issue is currently awaiting triage.
If CAPA/CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/area provider/eks
/priority important-soon
/help
@richardcase: This request has been marked as needing help from a contributor.
Guidelines
Please ensure that the issue body includes answers to the following questions:
- Why are we solving this issue?
- To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
- Does this issue have zero to low barrier of entry?
- How can the assignee reach out to you for help?
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
/area provider/eks
/priority important-soon
/help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
To give some extra detail/background:
- Creation: the original implementation of EKS in CAPA made a lot of use of the Wait functions in the AWS SDK to pause (i.e. wait) reconciliation until the EKS cluster reached the desired status. After discussions over a number of different issues/office hours we decided it would be better to requeue using reconcile.Result{RequeueAfter:} so that we don't block the reconciliation loop.
- Update: I consider this a bug, as we wait until the status of the EKS cluster is UPDATING and then continue on with reconciling things like encryption, tags, and OIDC. Really, we should requeue, and continue to requeue, until the cluster changes from UPDATING to a new state (ACTIVE if successful); see the sketch after this list.
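For the update path, a minimal sketch of the fix (a hypothetical helper, not the actual cluster.go code; the 30-second interval is an assumption) would keep requeueing while the cluster reports UPDATING and only continue once it is ACTIVE again:

```go
// Sketch only: illustrates requeueing while an EKS cluster is UPDATING,
// not CAPA's actual implementation.
package eks

import (
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/eks"
	"github.com/aws/aws-sdk-go/service/eks/eksiface"
	ctrl "sigs.k8s.io/controller-runtime"
)

// reconcileClusterVersion is hypothetical; the 30s interval is an assumption.
func reconcileClusterVersion(client eksiface.EKSAPI, name, desired string) (ctrl.Result, error) {
	out, err := client.DescribeCluster(&eks.DescribeClusterInput{Name: aws.String(name)})
	if err != nil {
		return ctrl.Result{}, err
	}

	// An update is still in flight: keep requeueing instead of reconciling
	// encryption, tags, and OIDC against a transitioning cluster.
	if aws.StringValue(out.Cluster.Status) == eks.ClusterStatusUpdating {
		return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
	}

	if aws.StringValue(out.Cluster.Version) != desired {
		if _, err := client.UpdateClusterVersion(&eks.UpdateClusterVersionInput{
			Name:    aws.String(name),
			Version: aws.String(desired),
		}); err != nil {
			return ctrl.Result{}, err
		}
		// Update requested: come back later rather than continuing now.
		return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
	}

	// ACTIVE and at the desired version: safe to continue reconciliation.
	return ctrl.Result{}, nil
}
```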
I think we should split this into 2 separate issues as we have a refactor and a bug here.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/unassign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
This issue is labeled with priority/important-soon but has not been updated in over 90 days, and should be re-triaged.
Important-soon issues must be staffed and worked on either currently, or very soon, ideally in time for the next release.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Deprioritize it with /priority important-longterm or /priority backlog
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
From office hours 2023-04-03:
/triage accepted
/help
This issue is labeled with priority/important-soon but has not been updated in over 90 days, and should be re-triaged.
Important-soon issues must be staffed and worked on either currently, or very soon, ideally in time for the next release.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Deprioritize it with /priority important-longterm or /priority backlog
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted