kube-prometheus
Need pod labels in kube-state-metrics
What did you do?
I have a similar issue to the ones mentioned in https://github.com/kubernetes/kube-state-metrics/issues/536 and https://github.com/prometheus-operator/prometheus-operator/issues/3460. I need all the pod labels to be present in the metrics, so that I can configure alert routing based on a label.
We have the standard set of kube-state-metrics rules deployed, and I don't want to modify them to add join metrics, since this is a standard set of rules that we pull from the community (https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) and modifying it would add maintenance overhead every time we update.
So is there a way to get all the pod labels without modifying the rules?
- Prometheus Operator version:
  prometheus-operator:
    Image: quay.io/coreos/prometheus-operator:v0.38.1
- Kubernetes version information:
  kubectl version
  Client Version: v1.19.3
  Server Version: v1.15.12-eks-31566f
Hello 👋 When you say pod labels, do you mean the kube_pod_labels metric, or labels on other pod metrics?
Hello, I mean the labels on the pod.
What I am trying to achieve is a generic implementation of an Alertmanager route for my Kubernetes workloads.
Example: I have a label owner=<team_name> on every pod/service/rs, with which I want an Alertmanager route option to route alerts based on team_name. With kube-state-metrics, when the KubePodCrashLooping alert fires, I want to send the notification to the corresponding team based on this label; to achieve this I need the pod/service labels. The alert rules are standard rules that we borrow from the community, and I don't want to modify these rules to add join metrics on kube_pod_labels or kube_service_labels (https://github.com/kubernetes/kube-state-metrics/issues/536#issuecomment-420203467). Is there any way in which I could pass a label extracted from kube_<workload>_labels to the other metrics?
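For context, a minimal sketch of the kind of Alertmanager routing this would enable, assuming each alert ends up carrying the pod's owner label (the receiver names here are hypothetical):
route:
  receiver: default-receiver
  routes:
    # Route each team's alerts by the owner label carried on the alert.
    - matchers:
        - owner="team-a"
      receiver: team-a-receiver
    - matchers:
        - owner="team-b"
      receiver: team-b-receiver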
Seems like something that could be done with kube-state-metrics 2.0 and applied as a jsonnet patch/addon to the project. Since this is a pretty frequent request I am adding a feature label.
@paulfantom, I would like to know: is this feature included in kube-state-metrics 2.0? I explored the docs and couldn't figure out a way; can you let me know if I can achieve this?
I believe it can be done with a group_right join on kube_pod_labels, but we do need an addon for this, as manually replacing these rules is pretty tiring.
Yes, exactly what @LeoQuote suggests. We need a jsonnet addon.
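A minimal sketch of that join, assuming kube-state-metrics exposes the pod's owner label as label_owner on kube_pod_labels (the metric on the right-hand side is just an example):
# kube_pod_labels always has the value 1, so multiplying leaves the
# right-hand metric's values untouched; group_right copies label_owner
# from the left-hand (one) side onto the many right-hand series.
kube_pod_labels{label_owner!=""}
  * on (namespace, pod) group_right (label_owner)
  kube_pod_container_status_restarts_total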
I just randomly came across this issue looking for a solution to a similar problem. Aren't you looking for this? https://github.com/kubernetes/kube-state-metrics/issues/1489
or for the helm chart you mentioned: https://github.com/prometheus-community/helm-charts/issues/1235#issuecomment-895167954
Is there a resolution for the issue of missing "custom pod labels" (coming from the pod itself) inside the "kube_pod_labels" metric?
When I used the following config:
kube-state-metrics:
  prometheus:
    monitor:
      metricRelabelings:
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
          replacement: pod_label_$1
      relabelings:
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
          replacement: pod_label_$1
I got only the following additional labels (the custom pod labels are still missing):
"metric": {
"__name__": "kube_pod_labels",
"container": "kube-state-metrics",
"endpoint": "http",
"instance": "100.96.0.140:8080",
"job": "kube-state-metrics",
"namespace": "default",
"pod": "webapp-deployment-cf4f7dfc7-nl469",
"pod_label_app_kubernetes_io_component": "metrics",
"pod_label_app_kubernetes_io_instance": "prometheus",
"pod_label_app_kubernetes_io_managed_by": "Helm",
"pod_label_app_kubernetes_io_name": "kube-state-metrics",
"pod_label_app_kubernetes_io_part_of": "kube-state-metrics",
"pod_label_app_kubernetes_io_version": "2.3.0",
"pod_label_helm_sh_chart": "kube-state-metrics-4.4.3",
"pod_label_pod_template_hash": "57c988498f",
"pod_label_release": "prometheus",
"service": "prometheus-kube-state-metrics",
"uid": "db48a0a8-ddf9-4bd8-aad3-ec070ada16c5"
},
Glad I've found this issue, as it pretty much confirms I wasn't going crazy when I realised that the current chart doesn't support custom pod labels in metrics to enable routing based on custom labels. The only way to do this currently is to disable all of the default rules in the kube-prom-stack chart and rewrite them manually. It would be great if the chart supported this out of the box.
Found this issue, and the following worked for me:
kube-state-metrics:
  extraArgs:
    - --metric-labels-allowlist=pods=[*]
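With that flag, every allowlisted pod label should appear on kube_pod_labels as a label_<name> label. A hypothetical resulting series for a pod labelled owner=team-a:
kube_pod_labels{namespace="default", pod="webapp-deployment-cf4f7dfc7-nl469", label_owner="team-a"} 1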
This case really needs to be included in the examples. For anyone else going down this rabbit hole:
- Create a new file utils.libsonnet alongside your main *promstack.jsonnet file:
{
  addArgs(args, name, containers): std.map(
    function(c)
      if c.name == name then
        c {
          args+: args,
        }
      else c,
    containers,
  ),
}
Now in the main jsonnet file add an import at the top:
local addArgs = (import './utils.libsonnet').addArgs;
Finally, below the values section:
kubeStateMetrics+:: {
  deployment+: {
    spec+: {
      template+: {
        spec+: {
          containers: addArgs(['--metric-labels-allowlist=pods=[*]'], 'kube-state-metrics', super.containers),
        },
      },
    },
  },
},
There's probably a way to import that addArgs from somewhere, but I haven't figured it out. This works.
Hi @kmurthyms, did you get something working without rewriting all the built-in alerts since you posted this issue?
I have a workaround for workload grouping: I added this spec to the kube-state-metrics ServiceMonitor:
- action: replace
  regex: ([a-z-]+[0-9]{0,1}?)-([a-z0-9-]+)
  replacement: $1
  sourceLabels:
    - pod
  targetLabel: app
This regex takes the prefix of a StatefulSet / Deployment / DaemonSet pod name. It works for use cases where you want the workload name and grouping based on workload.
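A worked example, using the kube-state-metrics pod name seen earlier in this thread (the relabel regex is anchored to the whole pod name, and [a-z-]+ cannot cross the digits of the ReplicaSet hash, so $1 ends at the workload name):
# pod = "webapp-deployment-cf4f7dfc7-nl469"
# $1  = "webapp-deployment"  ->  app="webapp-deployment"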
Keep in mind the regex workaround above will not work for any app whose workload name has a number in it, e.g. k8s-operator.*
So is this feature already available or not yet? I mean, using k8s pod labels in Prometheus/Grafana queries?
The --metric-labels-allowlist workaround gets the labels onto kube_pod_labels, but you still need to join all the metrics on kube_pod_labels, which means you'll still need to update all the alerts. Or am I missing something?
I don't think you're missing anything; that's exactly what I had to do: https://github.com/prometheus-operator/kube-prometheus/issues/887#issuecomment-1310998619
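For illustration, a hedged sketch of one such rewritten alert expression, reusing the join from earlier in the thread; this is modelled on the community KubePodCrashLooping rule but is not the exact upstream expression:
# Fire per pod stuck in CrashLoopBackOff, carrying label_owner so
# Alertmanager can route the notification to the owning team.
max_over_time(kube_pod_container_status_waiting_reason{reason="CrashLoopBackOff"}[5m])
  * on (namespace, pod) group_left (label_owner)
  kube_pod_labels
>= 1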