kube-prometheus
Kubernetes / Proxy dashboard is empty
What did you do? I've deployed the Prometheus Operator, Prometheus, Grafana, and Alertmanager using kube-prometheus. I noticed that the "Kubernetes / Proxy" dashboard appeared to be empty.
Did you expect to see something different?
I didn't expect an empty dashboard. The metric for the number of instances up shown in the graph was sum(up{job="kube-proxy"}). Upon debugging I saw that none of the generated ServiceMonitor manifests had a jobLabel that resolved to the value kube-proxy.
I also noticed that the dropdown in the Prometheus UI didn't have any metric with the prefix kubeproxy.
Is this expected or is this something that is broken?
Environment
- Prometheus Operator version: v0.33.0
- Kubernetes version information:
  Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
  Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
- Kubernetes cluster kind: kubeadm
Anything else we need to know?: I noticed that the kube-proxy metrics are available on port 10249 with the metric names used in this dashboard (e.g., kubeproxy_sync_proxy_rules_duration_seconds_count), but only from localhost, not remotely. kube-proxy runs with hostNetwork set to true, which is why I was able to SSH into the node and curl localhost on port 10249.
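For reference, the check described above looks roughly like this (the node address is specific to your environment, so treat it as a placeholder):

```shell
# SSH into a cluster node; the address/user are environment-specific.
ssh user@<node-address>

# kubeadm configures kube-proxy to bind metrics to 127.0.0.1:10249 by default,
# so this works locally on the node but fails from any other machine:
curl -s http://127.0.0.1:10249/metrics | grep kubeproxy_sync_proxy_rules
```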
You need to ensure that your kube-proxy is actually listening on some address other than localhost so Prometheus can actually scrape it :)
If scraping (i.e., network reachability) were the only issue, then the correct ServiceMonitor manifest would still have been generated and the Prometheus targets page would show the kube-proxy job as DOWN. Anyway, I've updated the kube-proxy ConfigMap to change the metrics bind address, and it is now reachable remotely as well.
I see from the jsonnet source files that the ServiceMonitor manifests are generated with the following names and defined in the following places:
- prometheus-serviceMonitor.yaml - prometheus.libsonnet#L207
- prometheus-serviceMonitorApiserver.yaml - prometheus.libsonnet#
- prometheus-serviceMonitorCoreDNS - prometheus.libsonnet#425
- prometheus-serviceMonitorEtcd - kube-prometheus-static-etcd.libsonnet#47
- prometheus-serviceMonitorKubeControllerManager - prometheus.libsonnet#332
- prometheus-serviceMonitorKubelet - prometheus.libsonnet#263
- prometheus-serviceMonitorKubeScheduler - prometheus.libsonnet#232
I was not able to find where the ServiceMonitor object for kube-proxy is defined in this repo. Can you please point me to it?
It seems you are right. I don't think we have a ServiceMonitor for kube-proxy anywhere. It seems that kind and minikube both run it as a pod, so it would be perfectly legitimate to create a PodMonitor by default collecting the metrics from kube-proxy processes. Do you want to shoot us a PR adding this? :)
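For context, a minimal PodMonitor along those lines might look like the following sketch; the namespace, label selector, and port name are assumptions based on how kubeadm typically labels the kube-proxy DaemonSet, not something this repo defines yet:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: kube-proxy
  namespace: monitoring        # assumes the kube-prometheus default namespace
  labels:
    k8s-app: kube-proxy
spec:
  jobLabel: k8s-app            # so targets get job="kube-proxy", matching the dashboard query
  selector:
    matchLabels:
      k8s-app: kube-proxy      # kubeadm labels the kube-proxy pods this way
  namespaceSelector:
    matchNames:
      - kube-system
  podMetricsEndpoints:
    - port: metrics            # assumes a named "metrics" port for 10249 on the pod spec
      interval: 30s
```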
Sure, I'll raise a PR in a couple of days, but I have a couple of questions first. By default, kubeadm configures kube-proxy to listen on 127.0.0.1 for metrics, so Prometheus would not be able to scrape them. This would have to be changed to 0.0.0.0 in one of the following two places:
- Before cluster initialization, the config file passed to kubeadm init should have KubeProxyConfiguration manifest with the field metricsBindAddress set to 0.0.0.0:10249
- If the k8s cluster is already up and running, we'll have to modify the ConfigMap kube-proxy in the namespace kube-system and set the metricsBindAddress field. After this the kube-proxy DaemonSet has to be restarted (at least in k8s v1.15.0, which is where I tested it):
kubectl -n kube-system rollout restart daemonset kube-proxy
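For the first option, the config file passed to kubeadm init would carry a KubeProxyConfiguration document roughly like this (a sketch with all other fields omitted; API versions shown are the ones current for k8s v1.15):

```yaml
# kubeadm init --config kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Bind the metrics endpoint on all interfaces so Prometheus can scrape it remotely.
metricsBindAddress: 0.0.0.0:10249
```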
Will adding this info in the docs be sufficient in addition to the PR?
Yes, adding those docs would be fantastic! Could you also make sure to adapt the minikube command we provide to do this? (We already do the same for controller-manager and scheduler, so it should be pretty straightforward.)
Looking forward to the PR! :slightly_smiling_face:
Sure. I will try to get the PR by next week.
Hey @lyveng, do you think you can make the PR? Thanks!
@jfassad I wanted to pick it up but haven't had time to get it done. Feel free to pick it up if you want to; if not, I'll try to get it done when I have time.
Prometheus couldn't find any kube-proxy metrics, so I added a ServiceMonitor for kube-proxy.
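A ServiceMonitor along those lines might look roughly like this sketch; it assumes you also create a kube-proxy Service in kube-system exposing port 10249 (kubeadm does not create one by default), so all names here are illustrative:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kube-proxy
  namespace: monitoring        # assumes the kube-prometheus default namespace
  labels:
    k8s-app: kube-proxy
spec:
  jobLabel: k8s-app            # makes up{job="kube-proxy"} match the dashboard query
  selector:
    matchLabels:
      k8s-app: kube-proxy      # must match the labels on your manually created Service
  namespaceSelector:
    matchNames:
      - kube-system
  endpoints:
    - port: metrics            # assumed name of the 10249 port on that Service
      interval: 30s
```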