kube-state-metrics
Kube Node Status NotReady detection
What would you like to be added: Currently kube-state-metrics lacks a way to detect whether a node has become NotReady for any specific reason. I would therefore like to request a metric such as node ready seconds, or last status change, in order to be able to detect such situations.
This issue is currently awaiting triage.
If kube-state-metrics contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@dgrisonnet @logicalhan
I would be happy to submit a patch to support your use case.
However, I noticed we already have kube_node_status_condition with labels for condition and status. Doesn't that already solve your problem?
Perhaps querying for something like kube_node_status_condition{condition="Ready", status!="true"}
https://github.com/kubernetes/kube-state-metrics/blob/main/docs/metrics/cluster/node-metrics.md
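For example, a minimal alert expression sketch on the existing metric (assuming the usual kube-state-metrics label values, where status is one of "true", "false", "unknown" and the gauge is 1 only for the status that currently applies):

```promql
# Node currently reports a Ready condition other than "true".
# The == 1 filter is needed because kube-state-metrics also exports the
# non-matching status series with value 0.
kube_node_status_condition{condition="Ready", status!="true"} == 1
```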
@ricardoapl this only provides the status for the specific point in time when you scraped it. However, what if you scrape every 30 seconds, and within that interval, the node becomes NotReady for 10 seconds? You would miss that status change.
The idea would be comparable to the kube_pod_container_status_last_terminated_reason metric.
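To make the idea concrete, here is a sketch of what such a query could look like, assuming a hypothetical gauge (not exported by kube-state-metrics today) that records the Unix timestamp of the last Ready condition transition:

```promql
# kube_node_status_ready_last_transition_timestamp_seconds is a hypothetical
# metric name, used here only for illustration. It would let you detect a
# Ready flip even if the NotReady window fell entirely between two scrapes.
time() - kube_node_status_ready_last_transition_timestamp_seconds < 600
```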
I don't think we can get that information from the NodeStatus today: https://github.com/kubernetes/api/blob/v0.30.1/core/v1/types.go#L5871-L5936
Also, if you miss the status change, it most likely means it auto-resolved in less than 30 seconds, so I am not sure how useful that information would be.
@dgrisonnet I faced an issue with some nodes that switched to the NotReady state, which caused problems for some pods that I cannot recall anymore. Unfortunately, the status change was not recorded by any metric. Because of that, I have since created an alert on log entries, which is what makes us aware of such events nowadays.
I had a conversation with one of the maintainers during KubeCon Paris who was also of the opinion that this metric is missing. I cannot recall his name, unfortunately.
However, if the API does not provide any way to obtain this data, things will indeed become complicated.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/assign @CatherineF-dev /triage accepted