Consider adding version information to the Cluster status
User Story
As a user, I would like to be able to access version information about my Cluster (topology) in the Cluster status.
Detailed Description
(this issue is based on the following conversation: https://github.com/kubernetes-sigs/cluster-api/pull/5292#discussion_r714000418)
Version status information is currently only available distributed over multiple resources:
- ControlPlane: .status.version (only applies to control plane providers using version, which is mandatory for ClusterClass)
- MachineDeployments: .spec.template.spec.version in combination with the replica status fields (?)
- MachinePool: .spec.template.spec.version in combination with the replica status fields (?)
If there is demand, it might be good to add summarized version information to the Cluster.status.
The control plane version (if it exists) could then serve as the source of truth for any consumer, e.g. a Node that does not want to break the Kubernetes version skew policy (kubelet vs apiserver) (quoted from https://github.com/kubernetes-sigs/cluster-api/pull/5292#discussion_r714681305).
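The skew check such a consumer would perform can be sketched as follows. The two-minor-version window reflects the upstream kubelet/apiserver skew policy at the time of this issue; the helper names are assumptions, not existing Cluster API functions.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorVersion extracts the minor number from a version like "v1.22.2".
func minorVersion(v string) (int, error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("malformed version %q", v)
	}
	return strconv.Atoi(parts[1])
}

// kubeletSkewOK reports whether a kubelet version respects the upstream
// skew policy relative to the apiserver: not newer than the apiserver,
// and at most two minor versions older.
func kubeletSkewOK(apiserver, kubelet string) (bool, error) {
	a, err := minorVersion(apiserver)
	if err != nil {
		return false, err
	}
	k, err := minorVersion(kubelet)
	if err != nil {
		return false, err
	}
	return k <= a && a-k <= 2, nil
}

func main() {
	ok, _ := kubeletSkewOK("v1.22.2", "v1.20.0")
	fmt.Println(ok) // two minor versions older is allowed
}
```

A consumer reading a summarized control plane version from Cluster.status could run this check before joining or upgrading a Node.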
/kind feature
/area api /cc @enxebre
/milestone v1.0
Are we envisioning a different status for managed and unmanaged Clusters?
Good question. I'm not sure, but if I'm not mistaken the version part can be the same for both, as the other resources are "connected" to the Cluster resource in the same way regardless of managed/unmanaged.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale this could be relevant also for https://github.com/kubernetes-sigs/cluster-api/issues/5222#issuecomment-943422381 (option 2)
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle frozen
/triage accepted /help
@fabriziopandini: This request has been marked as needing help from a contributor.
Guidelines
Please ensure that the issue body includes answers to the following questions:
- Why are we solving this issue?
- To address this issue, are there any code changes required? If so, what needs to be done, and which places can the assignee treat as reference points?
- Does this issue have zero to low barrier of entry?
- How can the assignee reach out to you for help?
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
/triage accepted /help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
(doing some cleanup on old issues without updates) /close Unfortunately, no one is picking up the task. The thread will remain available for future reference.
@fabriziopandini: Closing this issue.
In response to this:
(doing some cleanup on old issues without updates) /close Unfortunately, no one is picking up the task. The thread will remain available for future reference.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.