kube-state-metrics
Support verticalpodautoscaler v1
What would you like to be added: Support verticalpodautoscalers/v1
Why is this needed: The VerticalPodAutoscaler CRD has been updated from v1beta2 to v1, so kube-state-metrics should support the new version of the API. (release note)
Describe the solution you'd like
Generate client code from version v1 and update the dependencies. The legacy client dependency is on v1beta2; below is one of the places I found.
https://github.com/kubernetes/kube-state-metrics/blob/41eea36f69efd9824836089aae67d308111f4e01/internal/store/builder.go#L38
Additional context
@fpetkovski, is this something to be looked into or would the quote below from the README negate this issue?
Resources in Kubernetes can evolve, i.e., the group version for a resource may change from alpha to beta and finally GA in different Kubernetes versions. For now, kube-state-metrics will only use the oldest API available in the latest release.
I'm not too familiar with the compatibility and versioning of KSM. The quote says "will only use the oldest API available in the latest release", what does this mean exactly?
Would this mean that if autoscaling.k8s.io/v1beta2 is still available in Kubernetes 1.24, KSM will keep using v1beta2 until this API version is removed in one of the next Kubernetes releases, and only then start using autoscaling.k8s.io/v1?
KSM aims to be backwards compatible with the latest 3 Kubernetes versions. However, VPA is an external CRD, so I'm not sure we should apply the same rules, since the CRD is not tied to Kubernetes releases. I think it should be safe to switch to v1, and users on the old API can upgrade VPA independently of Kubernetes.
@mrueg @dgrisonnet any thoughts from your side?
Adding VPA to kube-state-metrics was a mistake, since its scope is to expose metrics only about the core resources of Kubernetes. This leads to the problem we have here, where we can't really move forward with an update because the API isn't tied to Kubernetes directly.
In my opinion, we should slowly deprecate it in favor of adding it as a configuration-based CRD. In the meantime, I think we shouldn't break the existing compatibility and stick to v1beta2.
As for the deprecation of VPA, I would recommend doing it over 2 minor releases of ksm:
v2.6.0:
- mark all VPA metrics as deprecated since v2.6.0 in order to notify our users
- provide a document guiding our users through the migration, with an example of a CRD config that provides the same functionality as before
v2.8.0:
- remove the VPA resource completely in ksm
I think this makes sense. Now that we have a way to export metrics from CR fields, we can start deprecating the built-in VPA support.
Should we plan work on this for v3?
I think there are a couple of larger changes that we might want to include (e.g. refactoring command-line arguments, removing specific metrics, renaming others, etc.).
We could mark VPA as deprecated in 2.6 and advise users to use CRD metrics. I would say the removal of the feature should happen in v3 at the latest, but I am also fine with having it in a 2.x release in case someone has time to do the work.
I agree with Filip, we shouldn't necessarily wait for 3.x to remove the metrics since we have a way to deprecate them now.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
We decided to deprecate built-in VPA support in v2.7.0; please consider using Custom Resource State configurations instead. See https://github.com/kubernetes/kube-state-metrics/issues/1790 and https://github.com/kubernetes/kube-state-metrics/pull/1835
Did anyone manage to create a CustomResourceState configuration that provides the same metrics? I'm new to Custom Resource State, but recreating the removed metrics doesn't seem like an easy task.
@mrueg wrote this doc https://github.com/kubernetes/kube-state-metrics/blob/main/docs/customresourcestate-metrics.md#verticalpodautoscaler to migrate VPA to CustomResourceState. It might be worth extending it to include all the VPA metrics that are going to be removed.
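For reference, the config in that doc looks roughly like this (an abridged sketch, not the authoritative version; the metric name, help text, and label set are illustrative, so check the linked doc for the full config):

kind: CustomResourceStateMetrics
spec:
  resources:
    - groupVersionKind:
        group: autoscaling.k8s.io
        kind: VerticalPodAutoscaler
        version: v1
      labelsFromPath:
        verticalpodautoscaler: [metadata, name]
        namespace: [metadata, namespace]
        target_api_version: [spec, targetRef, apiVersion]
        target_kind: [spec, targetRef, kind]
        target_name: [spec, targetRef, name]
      metrics:
        # One time series per container, taken from the VPA's status.
        - name: verticalpodautoscaler_containerrecommendations_target
          help: Target CPU the VerticalPodAutoscaler recommends for the container.
          each:
            type: Gauge
            gauge:
              path: [status, recommendation, containerRecommendations]
              valueFrom: [target, cpu]
              labelsFromPath:
                container: [containerName]
          commonLabels:
            resource: cpu
            unit: core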
I'd love to do it, but I'm not sure how to get metrics that come from a Kubernetes resource list, like:

spec:
  resourcePolicy:
    containerPolicies:
      - containerName: container
        minAllowed:  # a v1.ResourceList, e.g.:
          cpu: 100m
          memory: 256Mi

The 2 issues I had were:
1. How do I get the containerName as a label if the path I use is [spec, resourcePolicy, containerPolicies, minAllowed]? It's not possible to use path: [spec, resourcePolicy, containerPolicies] with valueFrom: [minAllowed, cpu], because quantities are not parsed successfully.
2. The current CPU value in the VPA CR is in millicores, and the collector in KSM divided the value by 1000 to report it in cores. Is that possible with the current custom resource collector, or should we make this a millicore metric?
Quantity/Percentage support is in the main branch and will be shipped with v2.9.0: https://github.com/kubernetes/kube-state-metrics/pull/1989
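With quantity parsing available, a config along these lines should be able to cover the minAllowed case from above (an untested sketch; the metric name is made up, and the paths follow the VPA v1 spec):

metrics:
  - name: verticalpodautoscaler_spec_resourcepolicy_container_policies_minallowed
    help: Minimum CPU the VerticalPodAutoscaler is allowed to set for the container.
    each:
      type: Gauge
      gauge:
        # Iterates over the containerPolicies list; each element contributes
        # one sample, with containerName exposed as the container label.
        path: [spec, resourcePolicy, containerPolicies]
        valueFrom: [minAllowed, cpu]
        labelsFromPath:
          container: [containerName]
    commonLabels:
      resource: cpu
      unit: core

If quantities are parsed the way the PR describes, that should also settle the millicore question: a value like 100m would be exposed as 0.1 cores rather than 100.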
I would suggest opening a new issue about the missing "complete replacement config for VPA in CRM" and continuing the conversation there.
Added the request https://github.com/kubernetes/kube-state-metrics/issues/2041 :)