kube-state-metrics
Does Custom Resource State Metrics support deeply nested objects?
We have a k8s object like the following:
status:
  resourceCount:
    tenantA:
      hostGroupA:
        example.com/type1: 2
        example.com/type2: 2
        example.com/type3: 20
We would like to generate metrics like resourceCount{tenant="tenantA", hostGroup="hostGroupA", resourceType="type2"} 2.
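Spelled out against the object above, the full set of desired series would look like this (the example above abbreviates the keys; this assumes the map keys are carried verbatim into the resourceType label, though in practice the example.com/ prefix might be stripped or sanitized):

resourceCount{tenant="tenantA", hostGroup="hostGroupA", resourceType="example.com/type1"} 2
resourceCount{tenant="tenantA", hostGroup="hostGroupA", resourceType="example.com/type2"} 2
resourceCount{tenant="tenantA", hostGroup="hostGroupA", resourceType="example.com/type3"} 20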
Initially I tried to use labelFromKey, but it seems to support only one level of nesting: I have to statically specify the path, and only at the last level can I use labelFromKey to generate metrics by resourceType.
metrics:
  - name: resourceCount
    each:
      type: Gauge
      gauge:
        path: [status, resourceCount, tenantA, hostGroupA]
        labelFromKey: resourceType
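With this approach, every tenant/hostGroup combination needs its own statically-pathed metric entry, since only the innermost map keys can become a label via labelFromKey. A sketch of how that multiplies (hostGroupB is hypothetical here, and each entry gets a distinct name because nothing in this config attaches the tenant or host group as labels):

metrics:
  - name: resourceCount_tenantA_hostGroupA
    each:
      type: Gauge
      gauge:
        path: [status, resourceCount, tenantA, hostGroupA]
        labelFromKey: resourceType
  # hypothetical second host group: the whole block is duplicated per path
  - name: resourceCount_tenantA_hostGroupB
    each:
      type: Gauge
      gauge:
        path: [status, resourceCount, tenantA, hostGroupB]
        labelFromKey: resourceType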
I'm wondering whether labelFromKey can be used for nested objects.
This issue is currently awaiting triage.
If kube-state-metrics contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
If I understand correctly, you want the "parent" keys in the CR to be added as labels.
This seems interesting but hard to generalize. How would this work if there were multiple keys with different structures? And when faced with sub-trees containing arrays, how would it know which key to use as the label name and value, and which key to traverse for further values? It seems like you would need a JSONPath expression, or a series of them, to express to k-s-m a generic pattern for traversing your CR to discover unique time series and their values.
Otherwise you won't be able to handle something like:
status:
  resourceCount:
    - name: tenantA
      hosts:
        - hostname: hostGroupA
          entries:
            - type: example.com/type1
              count: 2
            - type: example.com/type2
              count: 2
            - type: example.com/type3
              count: 20
... which is the other common pattern for expressing these sorts of structures.
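To make the shape of such a solution concrete, here is a purely hypothetical expression-based config; none of these fields (eachPath, labelsFromExpression, valueFromExpression) nor the ancestor-reference syntax exist in kube-state-metrics today, and the tenant/hostGroup labels illustrate exactly the ancestor-scoping problem described above:

metrics:
  - name: resourceCount
    each:
      type: Gauge
      gauge:
        # Hypothetical fields, not part of kube-state-metrics:
        # iterate every entry, then reach back up the tree for the
        # tenant and host names -- the hard part to generalize.
        eachPath: "{.status.resourceCount[*].hosts[*].entries[*]}"
        labelsFromExpression:
          tenant: "{ancestor(2).name}"        # invented ancestor-reference syntax
          hostGroup: "{ancestor(1).hostname}" # invented ancestor-reference syntax
          resourceType: "{.type}"
        valueFromExpression: "{.count}"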
Thanks for the response, @ringerc. Yes, I think we need something like JSONPath to extract arbitrary custom info from a k8s object. A kubectl request might look like:
kubectl get <resource> -o=jsonpath='{range .status.resourceCount[*]}{range @[*]}{range @}["{.}", "{@}"]{"\n"}{end}{end}{end}'
If there were a general solution for this, kube-state-metrics would be much more powerful, but it is understandable that this might be challenging. We ended up defining Prometheus metrics directly in our source code instead; it feels easier than configuring kube-state-metrics to expose highly customized objects.
Duplicate of https://github.com/kubernetes/kube-state-metrics/issues/2368
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".