cluster-api-provider-aws
Add new condition (EKSNodegroupUpdateSucceededCondition) to the AWSManagedMachinePool resource
What type of PR is this? /kind feature
What this PR does / why we need it:
This PR adds a new condition (EKSNodegroupUpdateSucceededCondition) to the AWSManagedMachinePool resource, as well as related reasons (Creating, Updating, FailedToCreate, FailedToUpdate). See the issue linked below, as well as the discussion in this PR.
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
fixes #2966
Special notes for your reviewer: I haven't found any places in the documentation that would need manual updating as a result of this change, but please let me know if I missed something. Thanks in advance for the review.
Checklist:
- [X] squashed commits
- [ ] includes documentation
- [ ] adds unit tests
- [ ] adds or updates e2e tests
Release note:
Add new condition (EKSNodegroupUpdateSucceededCondition) to the AWSManagedMachinePool resource.
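For reference, here is a minimal sketch of roughly what the added constants could look like. The package name, import path, comment text, and reason constant names are illustrative rather than the exact code in this PR; only the condition name and the reason string values come from the description above. The Creating/FailedToCreate reasons, which the review discussion below suggests belong with the existing EKSNodegroupReadyCondition, are omitted here.

```go
package expinfrav1 // hypothetical package name, for illustration only

import clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1" // cluster-api API version assumed

const (
	// EKSNodegroupUpdateSucceededCondition reports whether the most recent
	// update of the EKS managed nodegroup completed successfully.
	EKSNodegroupUpdateSucceededCondition clusterv1.ConditionType = "EKSNodegroupUpdateSucceeded"

	// EKSNodegroupUpdatingReason (Severity=Info) documents an update that is
	// still in progress. (Constant name is a guess; the value "Updating"
	// comes from the PR description.)
	EKSNodegroupUpdatingReason = "Updating"

	// EKSNodegroupUpdateFailedReason (Severity=Warning) documents a failed
	// attempt to update the nodegroup. (Constant name is a guess; the value
	// "FailedToUpdate" comes from the PR description.)
	EKSNodegroupUpdateFailedReason = "FailedToUpdate"
)
```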
Welcome @jon-fearer!
It looks like this is your first PR to kubernetes-sigs/cluster-api-provider-aws 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.
You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.
You can also check if kubernetes-sigs/cluster-api-provider-aws has its own contribution guidelines.
You may want to refer to our testing guide if you run into trouble with your tests not passing.
If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!
Thank you, and welcome to Kubernetes. :smiley:
Hi @jon-fearer. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
/retest
Creating and Updating sound more like reasons that explain operational status.
There is already an EKSNodegroupReadyCondition; Creating can be a reason why the nodegroup is not ready yet.
And for updates, a new condition like EKSNodegroupUpdateSucceededCondition with reasons Updating and FailedToUpdate makes more sense to me.
I will defer to @richardcase.
EKSNodegroupReadyCondition = False with a reason of Creating makes sense to me.
Would setting EKSNodegroupUpdateSucceededCondition to False (with reason Updating) during the update be misleading at all? It technically hasn't failed or succeeded at that point.
> Would setting EKSNodegroupUpdateSucceededCondition to False (with reason Updating) during the update be misleading at all? It technically hasn't failed or succeeded at that point.
Succeeded = False does not mean it failed, and explaining it with an Updating reason means the update has not succeeded yet because it is still in progress.
Example from Cluster API:
// DrainingSucceededCondition provide evidence of the status of the node drain operation which happens during the machine
// deletion process.
DrainingSucceededCondition ConditionType = "DrainingSucceeded"
// DrainingReason (Severity=Info) documents a machine node being drained.
DrainingReason = "Draining"
// DrainingFailedReason (Severity=Warning) documents a machine node drain operation failed.
DrainingFailedReason = "DrainingFailed"
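Following that Cluster API pattern, here is a hypothetical sketch (not the code in this PR) of how the cluster-api conditions helpers could drive such a condition: False with an Info-severity Updating reason while the update is running, False with a Warning-severity FailedToUpdate reason if it fails, and True once it completes. The helper name, package, and wiring into the nodegroup reconciliation are assumptions.

```go
package eks // hypothetical package, for illustration only

import (
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1" // cluster-api API version assumed
	"sigs.k8s.io/cluster-api/util/conditions"
)

// Constants from the earlier sketch, repeated so this example is self-contained.
const (
	EKSNodegroupUpdateSucceededCondition clusterv1.ConditionType = "EKSNodegroupUpdateSucceeded"
	EKSNodegroupUpdatingReason                                    = "Updating"
	EKSNodegroupUpdateFailedReason                                = "FailedToUpdate"
)

// markNodegroupUpdate is a hypothetical helper (not the PR's actual code).
// pool is the AWSManagedMachinePool, which satisfies conditions.Setter.
func markNodegroupUpdate(pool conditions.Setter, updateErr error, done bool) {
	switch {
	case updateErr != nil:
		// The update failed: condition False with a Warning-severity reason
		// and the failure message.
		conditions.MarkFalse(pool, EKSNodegroupUpdateSucceededCondition,
			EKSNodegroupUpdateFailedReason, clusterv1.ConditionSeverityWarning,
			"%s", updateErr.Error())
	case !done:
		// The update is still in progress: condition False, but only with an
		// Info-severity "Updating" reason, so nothing reads as a failure yet.
		conditions.MarkFalse(pool, EKSNodegroupUpdateSucceededCondition,
			EKSNodegroupUpdatingReason, clusterv1.ConditionSeverityInfo, "")
	default:
		// The update completed: condition True.
		conditions.MarkTrue(pool, EKSNodegroupUpdateSucceededCondition)
	}
}
```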
Is this PR ready for another review?
No. Should I move forward with the requested changes? On a related note, would it make sense to do something similar for the EKS control plane conditions/reasons?
A refactoring is needed there as well.
For example, EKSControlPlaneUpdatingCondition becomes False when the update completes, so there is no way to tell what went wrong when an update fails just by looking at it. Instead, we could have an EKSControlPlaneUpdateSucceededCondition that only shows up once an update has started, so the user will know an update is in progress, and if the update fails the reason will show up as well.
For these changes, we can define new conditions (like EKSControlPlaneUpdateSucceededCondition), deprecate the redundant ones (like EKSControlPlaneUpdatingCondition), and eventually remove the deprecated ones in a 2.x release. But this needs a separate issue/PR.
> A refactoring is needed there as well. For example, EKSControlPlaneUpdatingCondition becomes False when the update completes, so there is no way to tell what went wrong when an update fails just by looking at it. Instead, we could have an EKSControlPlaneUpdateSucceededCondition that only shows up once an update has started, so the user will know an update is in progress, and if the update fails the reason will show up as well.
I'm not sure about this, to be honest. If I start an update to the control plane, I think it makes sense to see EKSControlPlaneUpdatingCondition as True. If the update fails, then I see no problem with seeing EKSControlPlaneUpdatingCondition as False with a failed reason and a message that contains the failure details.
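For illustration only, one possible shape of the control plane refactor described above: a new Succeeded-style condition plus a deprecation notice on the old one. The names, string values, and deprecation wording are assumptions; this approach was not settled in this thread and, per the earlier comment, would need a separate issue/PR anyway.

```go
package ekscontrolplanev1 // hypothetical package name, for illustration only

import clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1" // cluster-api API version assumed

const (
	// EKSControlPlaneUpdateSucceededCondition would report the outcome of the
	// most recent control plane update; a False status would carry a reason
	// such as Updating (Info) or FailedToUpdate (Warning).
	EKSControlPlaneUpdateSucceededCondition clusterv1.ConditionType = "EKSControlPlaneUpdateSucceeded"

	// Deprecated: would be kept for a transition period and removed in a
	// later (2.x) release, per the suggestion above. String value shown is
	// illustrative, not necessarily the value used in the codebase.
	EKSControlPlaneUpdatingCondition clusterv1.ConditionType = "EKSControlPlaneUpdating"
)
```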
Apologies for the delay on my end. I'm still planning to update this PR based on the feedback by the end of next week.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
To complete the pull request process, please assign sedefsavas after the PR has been reviewed.
You can assign the PR to them by writing /assign @sedefsavas in a comment when ready.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
@sedefsavas This is ready for another look. Let me know your thoughts.
@sedefsavas @richardcase I think this PR is ready for another review. Since I was not involved from the start, I don't have much context, so I just wanted to highlight this PR for review.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.