Add a `reason` label to the `kube_deployment_status_condition` metric
What would you like to be added:
I would like a `reason` label to be added to the `kube_deployment_status_condition` metric.
Why is this needed:
`kube_deployment_status_condition` carries only two labels derived from the `DeploymentCondition` object: `condition` and `status`. These are not enough to distinguish a Deployment that is still progressing from one that has completed its rollout. A `reason` label would add the information needed to tell the two states apart.
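To illustrate the ambiguity: in the Kubernetes API, a completed rollout still reports `Progressing=True`, and only the condition's `Reason` field (`ReplicaSetUpdated` during a rollout, `NewReplicaSetAvailable` once complete) distinguishes the two. The label values below are illustrative, not actual kube-state-metrics output:

```
# Today: the same series is exposed both during a rollout and after it completes
kube_deployment_status_condition{namespace="default",deployment="web",condition="Progressing",status="true"} 1

# With the proposed label, the two states become distinguishable
kube_deployment_status_condition{namespace="default",deployment="web",condition="Progressing",status="true",reason="ReplicaSetUpdated"} 1
kube_deployment_status_condition{namespace="default",deployment="web",condition="Progressing",status="true",reason="NewReplicaSetAvailable"} 1
```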
Describe the solution you'd like
Extract the `reason` field from each `DeploymentCondition` and emit it as an additional label on the metric.
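A minimal sketch of the idea, using stand-in types rather than the real kube-state-metrics generator API (the actual types live in `k8s.io/api/apps/v1` and the KSM store packages, so names here are assumptions for illustration):

```go
package main

import "fmt"

// DeploymentCondition is a stand-in mirroring the fields of
// appsv1.DeploymentCondition that the metric would use.
type DeploymentCondition struct {
	Type   string
	Status string
	Reason string
}

// metricLabels sketches how the generator could build one label set per
// condition, now including the condition's Reason field.
func metricLabels(ns, name string, c DeploymentCondition) map[string]string {
	return map[string]string{
		"namespace":  ns,
		"deployment": name,
		"condition":  c.Type,
		"status":     c.Status,
		"reason":     c.Reason, // the proposed new label
	}
}

func main() {
	c := DeploymentCondition{
		Type:   "Progressing",
		Status: "True",
		Reason: "NewReplicaSetAvailable",
	}
	labels := metricLabels("default", "web", c)
	fmt.Println(labels["reason"])
}
```

Since the `Reason` values come from a small, controller-defined set, the label cardinality stays bounded.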
Additional context
However, `kube_deployment_status_condition` is a stable metric, as stated in https://github.com/kubernetes/kube-state-metrics/blob/main/docs/deployment-metrics.md
Any ideas for this case? Should we introduce a new metric? cc @dgrisonnet
Adding a new label is not a breaking change, so as long as its values are bounded, I would be fine with adding it.
k/k stability framework forbids that, but for kube-state-metrics I would allow it since new fields can be added to existing APIs without having to create a new version.
Got it.
Created https://github.com/kubernetes/kube-state-metrics/issues/1995 and https://github.com/kubernetes/kube-state-metrics/pull/1996 to discuss definitions of stable metrics in kube-state-metrics repo.
Adding a new label is not a breaking change, so as long as its values are bounded, I would be fine with adding it.
It depends on your ingestion. Since people ingest metrics into stores other than Prometheus, I would argue that the lowest common denominator dictates that it is a breaking change.
/triage accepted
TLDR: we can add new labels for KSM stable metric.
cc @augustfengd Feel free to add a new label if you're available.
Anyone working on this? If not, I'd like to give it a shot :) /cc @dgrisonnet @CatherineF-dev
Anyone working on this? If not, then I'd like to contribute to this issue!
Since there has been no update for a long time, I created a PR (https://github.com/kubernetes/kube-state-metrics/pull/2146).
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted