helm-charts
[prometheus] kube_deployment_status_replicas not sending the true value in node field.
Describe the bug
Hello everyone!
I'm using Prometheus and Grafana to visualize metrics.
I have the following query:
kube_deployment_status_replicas{namespace="$namespace", node=~"$Nodo"}
And the result in this field of the response:
results.A.frames.*.schema.fields.1.labels.node
is always the same node for every deployment. If I use kubectl with the wide option, it shows different nodes for the deployments' pods.
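For reference, the per-pod check with kubectl looks like this (the namespace below is a placeholder); the NODE column varies per pod, unlike the node label in the Prometheus response:
kubectl get pods -o wide -n <namespace>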
NOTE: I installed Prometheus and Grafana using FluxCD. This is my HelmRelease file:
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: prometheus
  namespace: prometheus
spec:
  interval: 5m
  chart:
    spec:
      chart: prometheus
      version: "15.12.0"
      sourceRef:
        kind: HelmRepository
        name: prometheus-community
        namespace: flux-system
      interval: 1m
  values:
    alertmanager:
      persistentVolume:
        storageClass: "gp2"
    server:
      persistentVolume:
        storageClass: "gp2"
      service:
        type: NodePort
      extraArgs:
        storage.local.retention: 720h
What's your helm version?
version.BuildInfo{Version:"v3.8.0", GitCommit:"d14138609b01886f544b2025f5000351c9eb092e", GitTreeState:"clean", GoVersion:"go1.17.5"}
What's your kubectl version?
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:25:17Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.14-eks-18ef993", GitCommit:"ac73613dfd25370c18cbbbc6bfc65449397b35c7", GitTreeState:"clean", BuildDate:"2022-07-06T18:06:50Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
Which chart?
prometheus
What's the chart version?
"15.12.0"
What happened?
The label results.A.frames.*.schema.fields.1.labels.node
should show the true node value, but it is always frozen at the same value for every deployment/pod.
The query is about kube_deployment_status_replicas.
This metric comes from the dependency:
kube-state-metrics
The version of this dependency is "4.13.*", which is set by prometheus chart version 15.12.0.
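For context, the dependency pinning in the prometheus chart's Chart.yaml looks roughly like this (a sketch only; the repository URL and condition field are assumed, not copied from the chart):
dependencies:
  # kube-state-metrics pinned by the prometheus chart (sketch)
  - name: kube-state-metrics
    version: "4.13.*"
    repository: https://prometheus-community.github.io/helm-charts
    condition: kubeStateMetrics.enabled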
What you expected to happen?
Every query for the metric kube_deployment_status_replicas
should return the real node associated with the deployment.
How to reproduce it?
Just install the chart at the version above and run the query to get the values for the pods.
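An equivalent way to reproduce without FluxCD is a plain Helm install of the same chart version (release and namespace names here are just examples):
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
# install chart version 15.12.0 with the changed values shown below
helm install prometheus prometheus-community/prometheus \
  --version 15.12.0 --namespace prometheus --create-namespace \
  -f values.yaml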
Enter the changed values of values.yaml?
alertmanager:
  persistentVolume:
    storageClass: "gp2"
server:
  persistentVolume:
    storageClass: "gp2"
  service:
    type: NodePort
  extraArgs:
    storage.local.retention: 720h
Enter the command that you execute and failing/misfunctioning.
Set the variables namespace and Node inside a Grafana dashboard and make a panel with this query:
kube_deployment_status_replicas{namespace="$namespace", node=~"$Node"}
Then change the Node dropdown option; the node information in the output is wrong.
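For completeness, the dashboard variables can be defined as Prometheus query variables along these lines (these exact definitions are an assumption, not taken from the dashboard itself):
namespace: label_values(kube_deployment_status_replicas, namespace)
Node:      label_values(kube_pod_info, node)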
Anything else we need to know?
I installed the chart using FluxCD. These are the versions:
- flux: v0.30.2
- helm-controller: v0.21.0
- kustomize-controller: v0.25.0
- notification-controller: v0.23.5
- source-controller: v0.24.4
The helm-controller itself uses Helm v3.8.2.
Every query about the metrics kube_deployment_status_replicas must send the real value from the node associated with the deployment.
Kube-state-metrics exposes two labels in kube_deployment_status_replicas: namespace and deployment. Any other labels present in the corresponding time series have been attached afterwards, e.g. via relabelling. I am sure there is a label referencing a host, e.g. instance, but this denotes the host of the Prometheus target's endpoint (__meta_kubernetes_endpoint_node_name) - it is not referencing a node related to the named deployment. This is why it is always the same (with a single instance of kube-state-metrics).
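If a per-node breakdown is actually needed, one rough approach is to join through kube_pod_info (which does carry a node label) and kube_replicaset_owner to map pods back to their owning deployment. The following PromQL is only a sketch using standard kube-state-metrics label names (created_by_name, replicaset, owner_name) and is untested:
count by (namespace, owner_name, node) (
    label_replace(
      kube_pod_info{namespace="$namespace"},
      "replicaset", "$1", "created_by_name", "(.*)"
    )
  * on (namespace, replicaset) group_left(owner_name)
    kube_replicaset_owner{owner_kind="Deployment"}
)
Here owner_name ends up holding the deployment name, so the result counts pods per deployment and node.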
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
This issue is being automatically closed due to inactivity.