cluster-api
✨ Implement MachineDeployment rolloutAfter support
What this PR does / why we need it:
If the reconciliation time is after `spec.rolloutAfter`, a rollout should happen or has already happened. A new MachineSet is created the first time the reconciliation time passes `spec.rolloutAfter`; otherwise, the oldest MachineSet whose creation timestamp is later than the `lastRolloutAfter` annotation is picked. When a new MachineSet is required because the reconciliation time is past `spec.rolloutAfter`, the `rolloutAfter` value is included when calculating the hash for the MachineSet name. This way the new MachineSet's name does not clash with the existing MachineSet that has the same template, and the rollout can be orchestrated as usual.
Co-authored-by: Enxebre [email protected]
Compared to the previous PR at #4596, I made the following changes:
- Refactored the table tests and tried to catch all cases.
- Adjusted the `generateMachineSetName` func to not append another hash to the name, because this would extend the machine object name, which could cause other unexpected issues for providers / machines due to the extended length. Instead I decided to recalculate the hash using the same information plus the `rolloutAfter` value.
- The current value of `MachineDeployment.Spec.RolloutAfter` now gets added to the MachineSet when it is created. By that, the sorting algorithm helps to return the MachineSet by using the following sort criteria:
  - New: `> lastRolloutAnnotation < creationTimestamp < Name`
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #4536
Additional information
- Current sorting algorithm: `< creationTimestamp < Name`
- Table to determine all kinds of cases (I hope this does not cause more confusion than not having this info; it did help to find the correct implementation):
| # | Case | Equal(MS, TPL) | MD.RolloutAfter vs now | MD.RolloutAfter vs MS.CreationTimestamp | Result |
|---|---|---|---|---|---|
| 1 | A | no | < (irrelevant) | < (irrelevant) | create |
| 2 | A | no | < (irrelevant) | > (irrelevant) | create |
| 3 | A | no | > (irrelevant) | < (irrelevant) | create |
| 4 | A | no | > (irrelevant) | > (irrelevant) | create |
| 5 | B | yes | < | < | create |
| 6 | C | yes | < | > | no-op |
| 7 | D | yes | > | < (irrelevant) | no-op |
| 8 | D | yes | > | > (irrelevant) | no-op |

Reduced table by Case:
| Case | Equal(MS, TPL) | MD.RolloutAfter vs now | MD.RolloutAfter vs MS.CreationTimestamp | Return Value |
|---|---|---|---|---|
| A | false | - | - | nil / Create |
| B | true | < | < | nil / Create |
| C | true | < | > | MS / no-op |
| D | true | > | - | MS / no-op |

Case description:
- A: Create new MachineSet because no existing one has an equivalent template
- B: Create new MachineSet having the same template due to RolloutAfter
- C: Keep old MachineSet which has an equal MachineTemplate because RolloutAfter was already done
- D: Keep old MachineSet which has an equal MachineTemplate because RolloutAfter should be done in the future
@chrischdi: This issue is currently awaiting triage.
If CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign neolit123 for approval by writing /assign @neolit123 in a comment. For more information see: The Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
@chrischdi: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| pull-cluster-api-verify-main | f4f2735 | link | true | /test pull-cluster-api-verify-main |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Have to take a look at this :-)
@vincepri @enxebre Given how long we spent on the previous PR, would be good to get a first opinion from your side.
/test help
@chrischdi: The specified target(s) for /test were not found.
The following commands are available to trigger required jobs:
- /test pull-cluster-api-build-main
- /test pull-cluster-api-e2e-main
- /test pull-cluster-api-test-main
- /test pull-cluster-api-test-mink8s-main
- /test pull-cluster-api-verify-main
The following commands are available to trigger optional jobs:
- /test pull-cluster-api-apidiff-main
- /test pull-cluster-api-e2e-full-main
- /test pull-cluster-api-e2e-informing-ipv6-main
- /test pull-cluster-api-e2e-informing-main
- /test pull-cluster-api-e2e-workload-upgrade-1-25-latest-main
Use /test all to run the following jobs that were automatically triggered:
- pull-cluster-api-apidiff-main
- pull-cluster-api-build-main
- pull-cluster-api-e2e-informing-ipv6-main
- pull-cluster-api-e2e-informing-main
- pull-cluster-api-e2e-main
- pull-cluster-api-test-main
- pull-cluster-api-test-mink8s-main
- pull-cluster-api-verify-main
In response to this:
/test help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/test pull-cluster-api-apidiff-main
/test pull-cluster-api-e2e-full-main
/test pull-cluster-api-e2e-informing-ipv6-main
/test pull-cluster-api-e2e-informing-main
/test pull-cluster-api-e2e-workload-upgrade-1-25-latest-main
From a quick glance, the current changes make sense to me, although these changes touch on the hashing code that @fabriziopandini was looking at for in place propagation of labels and annotations
> From a quick glance, the current changes make sense to me, although these changes touch on the hashing code that @fabriziopandini was looking at for in place propagation of labels and annotations

Fair 👍 so better hold this and adapt depending on what in place propagation may change.
Yup. +/- ideally consider what we want to do in this PR during implementation of in-place mutation so it fits nicely.
@chrischdi: PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@chrischdi: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| pull-cluster-api-test-main | 50ece3b6644e69d78f1d336e091f3e1e5a358c01 | link | true | /test pull-cluster-api-test-main |
| pull-cluster-api-test-mink8s-main | 50ece3b6644e69d78f1d336e091f3e1e5a358c01 | link | true | /test pull-cluster-api-test-mink8s-main |
| pull-cluster-api-e2e-main | 50ece3b6644e69d78f1d336e091f3e1e5a358c01 | link | true | /test pull-cluster-api-e2e-main |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This is going to be replaced by #7053, so closing in favor of it.