Add conditions to MachineDeployment Object
User Story

As a developer/user/operator I would like to have conditions documenting the operational state of MachineDeployment objects.
Anything else you would like to add: As required by the condition CAEP, the MachineDeployment objects should also provide a Ready condition describing the overall state of the object.
Tasks:
- [x] Summary `ReadyCondition` and Node availability condition `AvailableCondition`: https://github.com/kubernetes-sigs/cluster-api/pull/4625
- [x] MachineSet related conditions: `MachinesCreatedCondition`, `MachinesReadyCondition` and `ResizedCondition`: https://github.com/kubernetes-sigs/cluster-api/pull/5056
- [ ] MachineDeployment scaling related conditions: `MachinesSpecUpToDateCondition` and `ResizedCondition`
- [ ] Remove the use of Phases
- [ ] Cleanup: rename `calculateStatus()` to `updateStatus()` to be consistent with the rest of the code base.
Related: Conditions for KCP
/kind feature
Ideally, we would have something similar to `DeploymentProgressing`.
This is more to bring it in line with how `kubectl rollout status` works. With `clusterctl rollout status my-md-0` we would also watch the progression of the MachineDeployment by looking at `MachinesReadyCondition`. I think that would accomplish something similar.
/cc @fabriziopandini
/milestone v0.4.0
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Can I be assigned this ticket? Not that anyone else is jumping to work on it :laughing:
I'll need this for the clusterctl rollout status command: https://github.com/kubernetes-sigs/cluster-api/issues/3439
@Arvinderpal Sure, feel free to take it!
/priority important-soon /assign @Arvinderpal
@vincepri @detiber I have the PR ready. PTAL https://github.com/kubernetes-sigs/cluster-api/pull/4174
/lifecycle active
/kind release-blocking
@fabriziopandini to check if the PR is ready to go and double check the release blocking status
/area api
/milestone v1.0
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
For MachineDeployments it'd be interesting to consider surfacing these signals as conditions and differentiating rolling upgrades, e.g.:
- Upgrading version.
- Upgrading because of a bootstrap provider change.
- Upgrading because of an infra change.
Thoughts?
/milestone v1.2
@enxebre More granular info would be great. Are you suggesting a new condition for each of those bullets? Would they be better captured in the Reason/Message of a single condition? I'm leaning towards the latter.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/lifecycle frozen
/triage accepted
This issue is labeled with priority/important-soon but has not been updated in over 90 days, and should be re-triaged.
Important-soon issues must be staffed and worked on either currently, or very soon, ideally in time for the next release.
You can:
- Confirm that this issue is still relevant with `/triage accepted` (org members only)
- Deprioritize it with `/priority important-longterm` or `/priority backlog`
- Close this issue with `/close`
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
/triage accepted
/remove-priority important-soon /priority backlog
/help
@fabriziopandini: This request has been marked as needing help from a contributor.
Guidelines
Please ensure that the issue body includes answers to the following questions:
- Why are we solving this issue?
- To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
- Does this issue have zero to low barrier of entry?
- How can the assignee reach out to you for help?
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
/help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
cc @muraee
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with `/triage accepted` (org members only)
- Close this issue with `/close`
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted