cluster-api
Version validation for Minimum and/or Maximum version for CAPI objects
User Story
As an operator, I would like to ensure that a user cannot create or update a Cluster/KubeadmControlPlane/MachineDeployment/MachineSet/Machine with a Kubernetes version that is not supported by the version of Cluster API currently in use, in order to ensure compatibility.
Detailed Description
The Cluster API project currently maintains documentation that maps each version of the Cluster API core providers to the list of Kubernetes versions supported for workload clusters, per component.
This issue proposes adding additional validation for the Cluster, KubeadmControlPlane, MachineDeployment, MachinePool, MachineSet, and Machine objects to reject objects that violate the minimum and/or maximum supported version.
It may also make sense to add an annotation that lets a user explicitly skip/ignore this validation, similar to the annotation unsafe.topology.cluster.x-k8s.io/disable-update-class-name-check.
E.g.: unsafe.cluster.x-k8s.io/disable-minmax-version-check.
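A minimal sketch of what such a check could look like, assuming a hypothetical helper function, illustrative minimum/maximum bounds, and the annotation name proposed above; none of this is part of the current Cluster API codebase:

```go
// Illustrative only: a version range check with an opt-out annotation,
// in the spirit of the proposal above. Names and bounds are hypothetical.
package main

import (
	"fmt"

	"github.com/blang/semver/v4"
)

// Hypothetical opt-out annotation, modelled on
// unsafe.topology.cluster.x-k8s.io/disable-update-class-name-check.
const disableMinMaxVersionCheck = "unsafe.cluster.x-k8s.io/disable-minmax-version-check"

// validateVersion rejects a spec.version outside the supported range,
// unless the object carries the opt-out annotation.
func validateVersion(annotations map[string]string, version string, minVersion, maxVersion semver.Version) error {
	if _, ok := annotations[disableMinMaxVersionCheck]; ok {
		return nil // user explicitly opted out of the check
	}
	v, err := semver.ParseTolerant(version) // tolerates the leading "v" used in spec.version
	if err != nil {
		return fmt.Errorf("spec.version %q is not a valid semantic version: %w", version, err)
	}
	if v.LT(minVersion) || v.GT(maxVersion) {
		return fmt.Errorf("spec.version %q is outside the supported range [%s, %s] for this Cluster API release", version, minVersion, maxVersion)
	}
	return nil
}

func main() {
	minVersion := semver.MustParse("1.24.0")
	maxVersion := semver.MustParse("1.29.1000") // upper bound chosen for illustration only
	// A MachineDeployment asking for an unsupported version would be rejected:
	fmt.Println(validateVersion(map[string]string{}, "v1.31.0", minVersion, maxVersion))
	// While the opt-out annotation skips the check:
	fmt.Println(validateVersion(map[string]string{disableMinMaxVersionCheck: "true"}, "v1.31.0", minVersion, maxVersion))
}
```

In practice, such a check would presumably live in the existing validating webhooks of each listed object, with the supported range derived from the same data that backs the version support documentation.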
Anything else you would like to add:
There is already a validation for the supported minimum management cluster version in CAPI.
/kind feature
Probable duplicate of #4321 and #6614?
I have closed #4321, copying here a comment that may be relevant:
This is tricky, because in some cases the Kubernetes version could be defined on other resources than the KubeadmConfig/KubeadmConfigTemplate (e.g. Machines, MachineDeployments), and those resources are generic and should not be bound to CABPK limitations.
We should carefully check where the version is defined and ideally remove it from KubeadmConfig/KubeadmConfigTemplate, since this value should be inherited from higher-level abstractions.
I have closed #6614, copying here a comment that may be relevant:
I assume the same applies for the minimum version on create, and for both workload and management clusters (i.e. it also includes clusterctl init ...).
How to handle differences between the management cluster and workload clusters is already partially addressed in clusterctl, but we should re-assess it in light of the big picture proposed by this issue.
/triage accepted
/help
@fabriziopandini: This request has been marked as needing help from a contributor.
Guidelines
Please ensure that the issue body includes answers to the following questions:
- Why are we solving this issue?
- To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
- Does this issue have zero to low barrier of entry?
- How can the assignee reach out to you for help?
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
/triage accepted
/help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
/remove-lifecycle stale
/priority important-longterm
I'm closing this because some users rely on the fact that CAPI might support older releases outside of official support, as well as newer ones not yet included in it.
However, I want to stress that relying on something outside the support matrix is a choice that every user must make, taking into consideration that it can break at any time.
/close
@fabriziopandini: Closing this issue.
In response to this:
I'm closing this because some users rely on the fact that CAPI might support older releases outside of official support, as well as newer ones not yet included in it.
However, I want to stress that relying on something outside the support matrix is a choice that every user must make, taking into consideration that it can break at any time.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.