
Add new condition (EKSNodegroupUpdateSucceededCondition) to the AWSManagedMachinePool resource

jon-fearer opened this pull request 3 years ago • 17 comments

What type of PR is this? /kind feature

What this PR does / why we need it: This PR adds a new condition (EKSNodegroupUpdateSucceededCondition) to the AWSManagedMachinePool resource, as well as related reasons (Creating, Updating, FailedToCreate, FailedToUpdate). See issue link below as well as the discussion in this PR.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): fixes #2966

Special notes for your reviewer: I haven't found any places in the documentation that would need manual updating as a result of this change, but please let me know if I missed something. Thanks in advance for the review.

Checklist:

  • [X] squashed commits
  • [ ] includes documentation
  • [ ] adds unit tests
  • [ ] adds or updates e2e tests

Release note:

Add new condition (EKSNodegroupUpdateSucceededCondition) to the AWSManagedMachinePool resource.

jon-fearer avatar Dec 13 '21 23:12 jon-fearer
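
For context, here is a minimal sketch of the constants the PR description proposes, written in the style of the provider's existing condition definitions. The package name, the clusterv1 import path, and the severity notes are illustrative assumptions, not the merged code.

package v1beta1

import (
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

const (
	// EKSNodegroupUpdateSucceededCondition reports whether the last create or
	// update of the EKS nodegroup succeeded.
	EKSNodegroupUpdateSucceededCondition clusterv1.ConditionType = "EKSNodegroupUpdateSucceeded"

	// EKSNodegroupCreatingReason (Severity=Info) documents the nodegroup being created.
	EKSNodegroupCreatingReason = "Creating"

	// EKSNodegroupUpdatingReason (Severity=Info) documents the nodegroup being updated.
	EKSNodegroupUpdatingReason = "Updating"

	// EKSNodegroupFailedToCreateReason (Severity=Error) documents that creating the nodegroup failed.
	EKSNodegroupFailedToCreateReason = "FailedToCreate"

	// EKSNodegroupFailedToUpdateReason (Severity=Error) documents that updating the nodegroup failed.
	EKSNodegroupFailedToUpdateReason = "FailedToUpdate"
)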

Welcome @jon-fearer!

It looks like this is your first PR to kubernetes-sigs/cluster-api-provider-aws 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/cluster-api-provider-aws has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. :smiley:

k8s-ci-robot avatar Dec 13 '21 23:12 k8s-ci-robot

Hi @jon-fearer. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Dec 13 '21 23:12 k8s-ci-robot

/ok-to-test

Ankitasw avatar Dec 14 '21 04:12 Ankitasw

/retest

jon-fearer avatar Dec 14 '21 17:12 jon-fearer

Creating and Updating sound more like reasons that explain operational status. There is already EKSNodegroupReadyCondition; Creating can be a reason why the nodegroup is not ready yet.

For updates, a new condition like EKSNodegroupUpdateSucceededCondition with the reasons Updating and FailedToUpdate makes more sense to me.

I will defer to @richardcase.

sedefsavas avatar Dec 14 '21 22:12 sedefsavas

EKSNodegroupReadyCondition = False with a reason of Creating makes sense to me.

Would setting EKSNodegroupUpdateSucceededCondition to False (with reason Updating) during the update be misleading at all? It technically hasn't failed or succeeded at that point.

jon-fearer avatar Dec 14 '21 22:12 jon-fearer

Would setting EKSNodegroupUpdateSucceededCondition to False (with reason Updating) during the update be misleading at all? It technically hasn't failed or succeeded at that point.

Succeeded = false does not mean it failed, and explaining it with an Updating reason means the update has not succeeded because an update is still in progress.

Example from Cluster API:

// DrainingSucceededCondition provide evidence of the status of the node drain
// operation which happens during the machine deletion process.
DrainingSucceededCondition ConditionType = "DrainingSucceeded"

// DrainingReason (Severity=Info) documents a machine node being drained.
DrainingReason = "Draining"

// DrainingFailedReason (Severity=Warning) documents a machine node drain operation failed.
DrainingFailedReason = "DrainingFailed"

sedefsavas avatar Dec 14 '21 22:12 sedefsavas
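
To make the Succeeded semantics above concrete, here is a hedged sketch of how a reconciler could drive such a condition with Cluster API's conditions utility. The markNodegroupUpdate function, its updateFn parameter, and the condition and reason names are illustrative assumptions, not code from this PR.

package example

import (
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/cluster-api/util/conditions"
)

// markNodegroupUpdate wraps a single update attempt and keeps the proposed
// condition in sync with it. The updateFn parameter stands in for the real
// AWS API call.
func markNodegroupUpdate(pool conditions.Setter, updateFn func() error) error {
	const updateSucceeded clusterv1.ConditionType = "EKSNodegroupUpdateSucceeded"

	// While the update is in flight: not succeeded yet, but not failed either.
	conditions.MarkFalse(pool, updateSucceeded, "Updating",
		clusterv1.ConditionSeverityInfo, "nodegroup update in progress")

	if err := updateFn(); err != nil {
		// The update failed: same condition, error severity, failure message.
		conditions.MarkFalse(pool, updateSucceeded, "FailedToUpdate",
			clusterv1.ConditionSeverityError, "%v", err)
		return err
	}

	// The update completed: the condition flips to True.
	conditions.MarkTrue(pool, updateSucceeded)
	return nil
}

Read this way, False never means only "failed": the reason plus severity distinguish an in-progress update (Updating, Info) from a failed one (FailedToUpdate, Error).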

Is this PR ready for another review?

sedefsavas avatar Jan 03 '22 19:01 sedefsavas

Not yet; should I move forward with the requested changes? On a related note, would it make sense to do something similar for the EKS control plane conditions/reasons?

jon-fearer avatar Jan 03 '22 22:01 jon-fearer

A refactoring is needed there as well. For example, EKSControlPlaneUpdatingCondition becomes False when the update completes, so there is no way to tell what went wrong when an update fails just by looking at it. Instead, we could have an EKSControlPlaneUpdateSucceededCondition that only shows up once an update has started, so the user will know an update is in progress, and if the update fails, the failure reason will also show up.

For these changes, we can define new conditions (like EKSControlPlaneUpdateSucceededCondition), deprecate the redundant ones (like EKSControlPlaneUpdatingCondition), and eventually remove the deprecated ones in a 2.x release. But this needs a separate issue/PR.

sedefsavas avatar Jan 03 '22 23:01 sedefsavas
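
A hedged sketch of what that deprecation step could look like in a hypothetical follow-up PR; the constant values and the Deprecated notices are assumptions, not code from this repository:

package v1beta1

import clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"

const (
	// EKSControlPlaneUpdateSucceededCondition reports whether the last control
	// plane update succeeded; it would only be set once an update has started.
	EKSControlPlaneUpdateSucceededCondition clusterv1.ConditionType = "EKSControlPlaneUpdateSucceeded"

	// EKSControlPlaneUpdatingCondition is kept so existing tooling keeps
	// working during the transition.
	//
	// Deprecated: use EKSControlPlaneUpdateSucceededCondition instead; planned
	// for removal in a 2.x release.
	EKSControlPlaneUpdatingCondition clusterv1.ConditionType = "EKSControlPlaneUpdating"
)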

A refactoring is needed there as well. For example, EKSControlPlaneUpdatingCondition becomes False when the update completes, so there is no way to tell what went wrong when an update fails just by looking at it. Instead, we could have an EKSControlPlaneUpdateSucceededCondition that only shows up once an update has started, so the user will know an update is in progress, and if the update fails, the failure reason will also show up.

I'm not sure about this, to be honest. If I start an update to the control plane, I think it makes sense to see EKSControlPlaneUpdatingCondition as True. If the update fails, then I see no problem with seeing EKSControlPlaneUpdatingCondition as False with a failed reason and a message that contains the failure details.

richardcase avatar Jan 10 '22 08:01 richardcase

Apologies for the delay on my end. I'm still planning to update this PR based on the feedback by the end of next week.

jon-fearer avatar Jan 28 '22 19:01 jon-fearer

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: To complete the pull request process, please assign sedefsavas after the PR has been reviewed. You can assign the PR to them by writing /assign @sedefsavas in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

  • Approvers can indicate their approval by writing /approve in a comment
  • Approvers can cancel approval by writing /approve cancel in a comment

k8s-ci-robot avatar Feb 04 '22 21:02 k8s-ci-robot

@sedefsavas This is ready for another look. Let me know your thoughts.

jon-fearer avatar Feb 05 '22 00:02 jon-fearer

@sedefsavas @richardcase I think this PR is ready for another review. Since I was not involved from the start, I don't have much context, so I just wanted to highlight this PR for review.

Ankitasw avatar Mar 03 '22 06:03 Ankitasw

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jul 10 '22 11:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Aug 10 '22 07:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot avatar Sep 09 '22 07:09 k8s-triage-robot

@k8s-triage-robot: Closed this PR.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Sep 09 '22 07:09 k8s-ci-robot