cluster-proportional-autoscaler
Request for Inclusion of Compatibility Matrix in README.md
Issue Description:
I would like to request the inclusion of a compatibility matrix in the README.md of the cluster-proportional-autoscaler repository. A compatibility matrix would give users a clear overview of the Kubernetes versions supported by the tool, letting them quickly determine whether the cluster-proportional-autoscaler is compatible with their specific Kubernetes version and ensuring a smooth integration and usage experience.

An excellent example of such a matrix can be found in the metrics-server repository's README.md: https://github.com/kubernetes-sigs/metrics-server#compatibility-matrix. It serves as a quick reference for the Kubernetes versions supported by metrics-server.

Adding a similar matrix to the cluster-proportional-autoscaler README.md would give users the same clear, concise information about supported Kubernetes versions.
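For illustration only, such a matrix might be structured along these lines (the version pairings below are placeholders, not verified support claims):

| cluster-proportional-autoscaler version | Supported Kubernetes versions |
|-----------------------------------------|-------------------------------|
| X.Y.Z                                   | 1.NN – 1.MM                   |
| ...                                     | ...                           |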
Thank you for considering this feature request. Please let me know if any additional information is required.
I had a question around compatibility as well. Looking at the latest commit at the time of writing this comment, https://github.com/kubernetes-sigs/cluster-proportional-autoscaler/commit/a24cff63b0d93bf627c607bc0fe6ead68e15ce28, the code seems to have a direct dependency on the k8s.io modules pinned at https://github.com/kubernetes-sigs/cluster-proportional-autoscaler/blob/a24cff63b0d93bf627c607bc0fe6ead68e15ce28/go.mod#L9-L13.
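For reference, based on the versions discussed below, the pinned k8s.io dependencies in that go.mod are approximately:

```
require (
	k8s.io/api v0.26.3
	k8s.io/apimachinery v0.26.3
	k8s.io/client-go v0.26.3
	k8s.io/component-base v0.26.3
	k8s.io/utils v0.0.0-20230313181309-38a27ef9d749
)
```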
As the Go blog on publishing modules notes (https://go.dev/blog/publishing-go-modules):

> A v0 version does not make any stability guarantees, so nearly all projects should start with v0 as they refine their public API.
That being said,
k8s.io/api v0.26.3

> Branches track Kubernetes branches and are compatible with that repo.

https://github.com/kubernetes/api/tree/v0.26.3#compatibility

That is to say, it supports Kubernetes v1.26. This is the last commit on v0.26.3 at the time of writing this comment: https://github.com/kubernetes/api/commit/699cbbc336e23257e05cd1e5c97343e64ec4ccb3
k8s.io/apimachinery v0.26.3

> There are NO compatibility guarantees for this repository. It is in direct support of Kubernetes, so branches will track Kubernetes and be compatible with that repo. As we more cleanly separate the layers, we will review the compatibility guarantee.

https://github.com/kubernetes/apimachinery/tree/v0.26.3
https://github.com/kubernetes/apimachinery/commit/53ecdf01b997ca93c7db7615dfe7b27ad8391983
k8s.io/client-go v0.26.3

> For each v1.x.y Kubernetes release, the major version (first digit) would remain 0.

https://github.com/kubernetes/client-go/tree/v0.26.3#versioning

In other words, it corresponds to Kubernetes v1.26.
https://github.com/kubernetes/client-go/commit/8cbca742aebe24b24f7f4e32fd999942fa9133e8
k8s.io/component-base v0.26.3

> There are NO compatibility guarantees for this repository, yet. It is in direct support of Kubernetes, so branches will track Kubernetes and be compatible with that repo. As we more cleanly separate the layers, we will review the compatibility guarantee. We have a goal to make this easier to use in the future.

https://github.com/kubernetes/component-base/tree/v0.26.3
https://github.com/kubernetes/component-base/commit/1056e8d0d5a644c29c9a0aafbc472d053645fba3
k8s.io/utils v0.0.0-20230313181309-38a27ef9d749

No releases (or tags) yet.
As a basic rule of thumb, it seems that when the k8s.io imports in go.mod point to version v0.X.Y, the tool supports Kubernetes version 1.X. For example, if the library version is v0.26.3, it supports Kubernetes v1.26.
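A minimal sketch of that rule of thumb in Go (the helper function here is hypothetical, purely to illustrate the version mapping, not part of the project):

```go
package main

import (
	"fmt"
	"strings"
)

// kubeVersionFor maps a k8s.io module version such as "v0.26.3" to the
// Kubernetes minor release it tracks ("v1.26"), per the rule of thumb above.
// It returns an empty string when the version does not follow the v0.X.Y
// scheme (e.g. pseudo-versions like k8s.io/utils' v0.0.0-2023...).
func kubeVersionFor(moduleVersion string) string {
	parts := strings.Split(strings.TrimPrefix(moduleVersion, "v"), ".")
	if len(parts) < 2 || parts[0] != "0" || parts[1] == "0" {
		return ""
	}
	return "v1." + parts[1]
}

func main() {
	fmt.Println(kubeVersionFor("v0.26.3"))                            // v1.26
	fmt.Println(kubeVersionFor("v0.0.0-20230313181309-38a27ef9d749")) // "" (no mapping)
}
```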
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale