
Need pod labels in kube-state-metrics

Open kmurthyms opened this issue 4 years ago • 17 comments

What did you do?

I have a similar issue to the ones described in https://github.com/kubernetes/kube-state-metrics/issues/536 and https://github.com/prometheus-operator/prometheus-operator/issues/3460. I need all the pod labels to be present in the metrics, so that I can configure alert routing based on those labels.

We have the standard set of kube-state-metrics rules deployed, and I don't want to modify them to add join metrics, since this is a standard set of rules that we pull from the community (https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) and modifying it would add maintenance overhead every time we update.

So is there a way to get all the pod labels without modifying the rules?

  • Prometheus Operator version:
prometheus-operator:
    Image:      quay.io/coreos/prometheus-operator:v0.38.1
  • Kubernetes version information:

    kubectl version

Client Version: v1.19.3
Server Version: v1.15.12-eks-31566f

kmurthyms avatar Jan 25 '21 08:01 kmurthyms

Hello 👋 When you say pod labels, do you mean the kube_pod_labels metric, or labels on other pod metrics?

lilic avatar Jan 25 '21 09:01 lilic

Hello, I mean the labels on the pod. What I am trying to achieve is a generic implementation of an Alertmanager route for my Kubernetes workloads. Example: I have a label on every pod/service/rs, owner=<team_name>, with which I want an Alertmanager route option that routes alerts based on team_name. With kube-state-metrics, when the KubePodCrashLooping alert fires, I want to send the notification to the corresponding team based on this label; to achieve this I need the pod/service labels. The alert rules are standard rules that we borrow from the community, and I don't want to modify these rules to add join metrics on kube_pod_labels or kube_service_labels (https://github.com/kubernetes/kube-state-metrics/issues/536#issuecomment-420203467). Is there any way I could pass a label extracted from kube_<workload>_labels to other metrics?

kmurthyms avatar Jan 25 '21 17:01 kmurthyms
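For illustration, a minimal Alertmanager routing sketch for the setup described above, assuming the owner pod label has already been attached to the firing alerts as label_owner (e.g. via a join on kube_pod_labels), and where team-payments is a hypothetical receiver name:

    route:
      receiver: default                  # fallback for alerts without an owner label
      routes:
        - match:
            label_owner: team-payments   # value of the owner=<team_name> pod label
          receiver: team-payments
    receivers:
      - name: default
      - name: team-payments
        # team-specific slack_configs / email_configs would go here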

Seems like something that could be done with kube-state-metrics 2.0 and applied as a jsonnet patch/addon to the project. Since this is a pretty frequent request I am adding a feature label.

paulfantom avatar Jan 28 '21 13:01 paulfantom

@paulfantom, is this feature included in kube-state-metrics 2.0? I explored the docs and couldn't figure out a way to do it. Can you let me know how I can achieve this?

kmurthyms avatar Apr 14 '21 04:04 kmurthyms

I believe it can be done with a group_right join on kube_pod_labels, but we do need an addon for this, as manually replacing these rules is pretty tiring.

LeoQuote avatar Jul 21 '21 07:07 LeoQuote
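A sketch of the join @LeoQuote describes, written here with group_left since the pod-level metric is usually the "many" side, and assuming kube-state-metrics exposes the owner pod label as label_owner on kube_pod_labels:

    # kube_pod_labels always has the value 1, so multiplying keeps the
    # left-hand side's value and merely copies the label across.
    kube_pod_container_status_restarts_total
      * on (namespace, pod) group_left (label_owner)
    kube_pod_labels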

Yes, exactly what @LeoQuote suggests. We need a jsonnet addon.

paulfantom avatar Jul 21 '21 07:07 paulfantom

I just randomly came across this issue looking for a solution to a similar problem. Aren't you looking for this? https://github.com/kubernetes/kube-state-metrics/issues/1489

or for the helm chart you mentioned: https://github.com/prometheus-community/helm-charts/issues/1235#issuecomment-895167954

dVerhees avatar Jan 17 '22 15:01 dVerhees

Is there a resolution for the issue of missing custom pod labels (coming from the pod itself) in the kube_pod_labels metric?

When I used the following config:

kube-state-metrics:
  prometheus:
    monitor:
      metricRelabelings:
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
          replacement: pod_label_$1

      relabelings:
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
          replacement: pod_label_$1

I got only the following additional labels (custom pod labels are still missing):

        "metric": {
          "__name__": "kube_pod_labels",
          "container": "kube-state-metrics",
          "endpoint": "http",
          "instance": "100.96.0.140:8080",
          "job": "kube-state-metrics",
          "namespace": "default",
          "pod": "webapp-deployment-cf4f7dfc7-nl469",
          "pod_label_app_kubernetes_io_component": "metrics",
          "pod_label_app_kubernetes_io_instance": "prometheus",
          "pod_label_app_kubernetes_io_managed_by": "Helm",
          "pod_label_app_kubernetes_io_name": "kube-state-metrics",
          "pod_label_app_kubernetes_io_part_of": "kube-state-metrics",
          "pod_label_app_kubernetes_io_version": "2.3.0",
          "pod_label_helm_sh_chart": "kube-state-metrics-4.4.3",
          "pod_label_pod_template_hash": "57c988498f",
          "pod_label_release": "prometheus",
          "service": "prometheus-kube-state-metrics",
          "uid": "db48a0a8-ddf9-4bd8-aad3-ec070ada16c5"
        },

kiril-dayradzhiev avatar Mar 21 '22 17:03 kiril-dayradzhiev

Glad I've found this issue, as it pretty much confirms I wasn't going crazy when I realised that the current chart doesn't support custom pod labels in metrics to enable routing based on custom labels. The only way to do this currently is to disable all of the default rules in the kube-prom-stack chart and rewrite them manually. It would be great if the chart supported this out of the box.

gurpalw avatar Nov 10 '22 22:11 gurpalw

Found this issue and the following worked for me

kube-state-metrics:
  extraArgs:
    - --metric-labels-allowlist=pods=[*]

jcputter avatar Jan 30 '23 11:01 jcputter
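A quick sanity check that the flag took effect, assuming a pod that carries an owner label (kube-state-metrics exposes allowlisted pod labels on kube_pod_labels with a label_ prefix):

    # should return one series per pod that has the owner label set
    kube_pod_labels{label_owner!=""}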

This case really needs to be included in the examples. For anyone else going down this rabbit hole:

  1. Create a new file utils.libsonnet alongside your main *promstack.jsonnet file:
{
  addArgs(args, name, containers): std.map(
    function(c)
      if c.name == name then
        c {
          args+: args,
        }
      else c,
    containers,
  ),
}

Now in the main jsonnet file add an import at the top:

local addArgs = (import './utils.libsonnet').addArgs;

Finally, below the values section:

  kubeStateMetrics+:: {
    deployment+: {
      spec+: {
        template+: {
          spec+: {            
            containers: addArgs(['--metric-labels-allowlist=pods=[*]'], 'kube-state-metrics', super.containers),
          },
        },
      },
    },
  },

There's probably a way to import that addArgs helper from somewhere, but I haven't figured it out. This works.

lmyslinski avatar Mar 16 '23 09:03 lmyslinski
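One way to verify the patch was picked up, assuming the standard kube-prometheus build.sh workflow that renders into a manifests/ directory:

    ./build.sh promstack.jsonnet
    # the flag should now appear in the rendered kube-state-metrics deployment
    grep -r -- '--metric-labels-allowlist' manifests/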

Hi @kmurthyms, did you get something working without rewriting all the built-in alerts since you posted this issue?

BrewToR avatar Mar 22 '23 11:03 BrewToR

I have a workaround for workload grouping. I added this spec to the kube-state-metrics ServiceMonitor:

    - action: replace
      regex: ([a-z-]+[0-9]{0,1}?)-([a-z0-9-]+)
      replacement: $1
      sourceLabels:
      - pod
      targetLabel: app

This regex takes the prefix of a StatefulSet / Deployment / DaemonSet pod name. It works for use cases where you want the deployment name and grouping based on workload.

segevmatuti1 avatar Jun 04 '23 09:06 segevmatuti1

I have a workaround for workload grouping. I added this spec to the kube-state-metrics ServiceMonitor:

    - action: replace
      regex: ([a-z-]+[0-9]{0,1}?)-([a-z0-9-]+)
      replacement: $1
      sourceLabels:
      - pod
      targetLabel: app

This regex takes the prefix of a StatefulSet / Deployment / DaemonSet pod name. It works for use cases where you want the deployment name and grouping based on workload.

Keep in mind it will not work for any app whose deployment name contains a number, e.g. k8s-operator.*

hajdukda avatar Nov 15 '23 14:11 hajdukda
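To make the caveat concrete, here is how that regex (which Prometheus anchors at both ends) behaves on a few hypothetical pod names when writing $1 into the app label:

    webapp-deployment-5d8f7c9b4-nl469  ->  app="webapp-deployment"   # replica-set hash contains digits
    mysql-0                            ->  app="mysql"               # StatefulSet ordinal
    node-exporter-x7k2p                ->  app="node-exporter"       # DaemonSet suffix
    k8s-operator-5d8f7c9b4-abcde       ->  no match, app stays unset # digit inside the name breaks [a-z-]+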

So is this feature already available or not yet? I mean using k8s pod labels in Prometheus/Grafana queries.

vieenodp avatar Mar 04 '24 11:03 vieenodp

Found this issue and the following worked for me

kube-state-metrics:
  extraArgs:
    - --metric-labels-allowlist=pods=[*]

But you still need to join all the metrics on kube_pod_labels, which means you'll still need to update all the alerts. Or am I missing something?

zstlx avatar Mar 18 '24 15:03 zstlx

Found this issue and the following worked for me

kube-state-metrics:
  extraArgs:
    - --metric-labels-allowlist=pods=[*]

But you still need to join all the metrics on kube_pod_labels, which means you'll still need to update all the alerts. Or am I missing something?

I don't think you're missing anything; that's exactly what I had to do: https://github.com/prometheus-operator/kube-prometheus/issues/887#issuecomment-1310998619

gurpalw avatar Mar 18 '24 18:03 gurpalw
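For reference, a sketch of one such rewritten rule, adapted from the community KubePodCrashLooping alert and assuming kube-state-metrics runs with --metric-labels-allowlist=pods=[*] so that kube_pod_labels carries label_owner; the joined label is then available for Alertmanager routing:

    - alert: KubePodCrashLooping
      expr: |
        max_over_time(kube_pod_container_status_waiting_reason{reason="CrashLoopBackOff"}[5m]) >= 1
          * on (namespace, pod) group_left (label_owner)
        kube_pod_labels
      for: 15m
      labels:
        severity: warning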