
data missing in grafana dashboard

Open · rajeshkothinti opened this issue 3 years ago · 2 comments

hello,

I followed the chart installation instructions, keeping all the default values, but could not get any data in the dashboard. I then tried modifying values.yaml so that data would be scraped and show up in Grafana.

I also tried with the content below, but no luck.

The chart installation itself went fine, but I am missing something to get data into the dashboard. I am able to see Kubernetes cluster data (cluster memory, CPU, and so on) in another dashboard.

Please advise

helm upgrade x509-certificate-exporter enix/x509-certificate-exporter --values myvalues.yaml

hostPathsExporter:
  daemonSets:
    nodes:
      watchFiles:
        - /var/lib/kubelet/pki/kubelet-client-current.pem
        - /etc/kubernetes/pki/apiserver.crt
        - /etc/kubernetes/pki/apiserver-etcd-client.crt
        - /etc/kubernetes/pki/apiserver-kubelet-client.crt
        - /etc/kubernetes/pki/ca.crt
        - /etc/kubernetes/pki/front-proxy-ca.crt
        - /etc/kubernetes/pki/front-proxy-client.crt
        - /etc/kubernetes/pki/etcd/ca.crt
        - /etc/kubernetes/pki/etcd/healthcheck-client.crt
        - /etc/kubernetes/pki/etcd/peer.crt
        - /etc/kubernetes/pki/etcd/server.crt
      watchKubeconfFiles:
        - /etc/kubernetes/admin.conf
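
[Editor's note] A quick way to confirm that these hostPathsExporter values produce running exporter pods that actually serve metrics, independent of Prometheus and Grafana. This is a hedged sketch: the label selector follows standard Helm chart labels and 9793 is the exporter's usual listen port, but both may differ in your release.

# List the exporter pods (label selector is an assumption based on standard Helm labels)
kubectl get pods -l app.kubernetes.io/name=x509-certificate-exporter

# Port-forward to one pod and query its metrics endpoint directly
# (9793 is assumed; check the port in your rendered DaemonSet)
kubectl port-forward pod/<one-of-the-exporter-pods> 9793:9793
curl -s http://localhost:9793/metrics | grep x509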

  Thanks

rajeshkothinti · Sep 11 '22 15:09

Hi @kRajr

Given these values you must be a Prometheus operator user. It is likely the operator did not select the ServiceMonitor object installed by our Helm chart.

Troubleshooting could go this way:

  • Get access to the Prometheus web UI (kubectl port-forward, if it is not exposed in your cluster)
  • Go to Status/Targets and look for "x509"
  • If the x509-certificate-exporter target shows up as "DOWN", it could be a NetworkPolicy issue. You may have set up network isolation between namespaces.
  • Or, if the target is missing entirely, then it has to do with the Prometheus operator
    • Check the operator's logs while you helm install the exporter (uninstall it first). They may explain why the ServiceMonitor is not accepted.
    • Inspect the YAML definition of your Prometheus custom resource. It likely has serviceMonitorNamespaceSelector or serviceMonitorSelector settings that prevent it from finding the ServiceMonitor installed by x509-certificate-exporter (a few example commands are sketched below).
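
[Editor's note] A minimal sketch of those checks with kubectl. The namespace, service name, and ServiceMonitor name are assumptions based on a typical kube-prometheus-stack setup and a default chart release name; adjust them to your cluster.

# Reach the Prometheus web UI if it is not exposed (namespace and service name are assumptions)
kubectl -n monitoring port-forward svc/prometheus-operated 9090:9090

# Look at the selectors declared on the Prometheus custom resource
kubectl -n monitoring get prometheus -o yaml | grep -i -A 3 servicemonitor

# Check which labels the exporter's ServiceMonitor actually carries
kubectl get servicemonitor --all-namespaces | grep x509
kubectl get servicemonitor x509-certificate-exporter -o yaml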

I hope this helps a bit. Otherwise please provide a YAML export of your Prometheus object.

npdgm · Sep 12 '22 09:09

Hi @npdgm

Thanks for the troubleshooting steps, they gave me some insight. I redeployed the chart with default values. I am using the Prometheus operator; it was deployed with the kube-prometheus Helm chart. In the Prometheus custom resource I see labels set only for serviceMonitorSelector, as below:

serviceMonitorNamespaceSelector: {}
serviceMonitorSelector:
  matchLabels:
    release: kube-prometheus-stack

I added the label release: kube-prometheus-stack to the x509 ServiceMonitor selector field to match the Prometheus operator, but it did not help, and there was nothing in the Prometheus operator pod logs either. I then redeployed the x509-certificate-exporter chart with the default values.yaml and manually added the label below to the x509 ServiceMonitor template before deploying the chart. Tailing the Prometheus operator logs still shows nothing mentioning x509.

  selector:
    matchLabels:
      {{- include "x509-certificate-exporter.selectorLabels" . | nindent 6 }}
      release: kibe-prometheus-stack
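
[Editor's note] The label match can also be verified, or applied, directly with kubectl instead of editing the chart template. A small sketch, assuming the ServiceMonitor object is named after the release (x509-certificate-exporter); list the ServiceMonitors first if unsure.

# Show the labels the ServiceMonitor carries, to compare against serviceMonitorSelector on the Prometheus object
kubectl get servicemonitor x509-certificate-exporter -o yaml | grep -A 5 'labels:'

# Or add the expected label in place (object name and namespace are assumptions)
kubectl label servicemonitor x509-certificate-exporter release=kube-prometheus-stack --overwrite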

much appreciated

rajeshkothinti · Sep 16 '22 17:09

Hello mates. I am using this chart with the PodMonitor because of the same issue. It's related to the fact that the Service in the chart is created as headless, so the ServiceMonitor is not able to scrape the service even when the selectors are well crafted. I tested these changes and they worked perfectly, but I had no time to open an issue with the fix, so for the moment I'm using the PodMonitor (I honestly don't like it much, so I will try to open a PR ASAP).

achetronic · Jul 12 '23 15:07

@achetronic thank you for taking the time to report issues. We really appreciate it.

I agree with your position on PodMonitors. By default a ServiceMonitor will offer greater compatibility with older prometheus-operator versions and is quite the standard.

The associated Service was made headless on purpose, as there is no point in having an internal load-balancer in front of multiple exporters. Prometheus-operator queries Endpoints directly and should not need kube-proxy or a CNI to provision unneeded network configuration. I wonder why you and possibly other users have an issue with the headless service. We've been deploying this chart on many Kubernetes distributions, CNIs, and cloud providers, and never encountered this situation. If you can spare a little time, could you tell us about your environment and any detail you think may differ from most common clusters? For example:

  • k8s distribution, version, and the cloud provider used if it's managed
  • how prometheus-operator was deployed, its version, and whether the kube-prometheus-stack chart was used
  • whether NetworkPolicies are in use
  • which CNI is used, and whether kube-proxy is employed

Anyhow, I went and investigated a few charts from prometheus-community and they don't seem to use headless Services. Even though I'm pretty sure other exporters do use headless Services, let's follow the same practice as prometheus-community. I will open a PR to make the Service a regular ClusterIP by default and move the headless option behind a values flag (see the sketch below). You can expect a release fairly soon, as we have a few CI and build changes in the pipe.
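
[Editor's note] For readers following along, the practical difference comes down to a single field on the Service. A minimal illustrative sketch, not the chart's exact template (names and port are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: x509-certificate-exporter
spec:
  clusterIP: None    # headless: no virtual IP, DNS resolves straight to the pod IPs
  selector:
    app.kubernetes.io/name: x509-certificate-exporter
  ports:
    - name: metrics
      port: 9793
      targetPort: metrics

Dropping the clusterIP: None line, which is the change described above, turns this into a regular ClusterIP Service, matching the practice of the prometheus-community charts.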

npdgm · Jul 12 '23 16:07

🎉 This issue has been resolved in version 3.8.0 🎉

The release is available on GitHub.

Your semantic-release bot 📦🚀

monkeynator · Jul 12 '23 17:07