x509-certificate-exporter
Data missing in Grafana dashboard
Hello,
I followed the chart installation instructions, keeping all the default values, but could not get any data in the dashboard. I then tried modifying values.yaml so the data would be scraped into Grafana, using the content below, but still no luck.
The chart installation itself went fine, so I must be missing something to get data into the dashboard. I am able to get Kubernetes cluster data, such as cluster memory and CPU, in another monitoring dashboard.
Please advise.
helm upgrade x509-certificate-exporter enix/x509-certificate-exporter --values myvalues.yaml
```yaml
hostPathsExporter:
  daemonSets:
    nodes:
      watchFiles:
        - /var/lib/kubelet/pki/kubelet-client-current.pem
        - /etc/kubernetes/pki/apiserver.crt
        - /etc/kubernetes/pki/apiserver-etcd-client.crt
        - /etc/kubernetes/pki/apiserver-kubelet-client.crt
        - /etc/kubernetes/pki/ca.crt
        - /etc/kubernetes/pki/front-proxy-ca.crt
        - /etc/kubernetes/pki/front-proxy-client.crt
        - /etc/kubernetes/pki/etcd/ca.crt
        - /etc/kubernetes/pki/etcd/healthcheck-client.crt
        - /etc/kubernetes/pki/etcd/peer.crt
        - /etc/kubernetes/pki/etcd/server.crt
      watchKubeconfFiles:
        - /etc/kubernetes/admin.conf
```
Thanks
Hi @kRajr
Given these values you must be a Prometheus operator user. It is likely the operator did not select the ServiceMonitor object installed by our Helm chart.
Troubleshooting could go this way:
- Get access to the Prometheus web UI (`kubectl port-forward`, if it is not exposed in your cluster)
- Go to Status/Targets and look for "x509"
- If the x509-certificate-exporter is showing up "DOWN", then it could be a `NetworkPolicy` issue. You may have set up network isolation of namespaces.
- Or, if it is missing entirely, then it has to do with the Prometheus operator:
  - Check the logs of the operator when you `helm install` the exporter (uninstall it first). They may explain why the `ServiceMonitor` is not accepted.
  - Inspect the YAML definition of your `Prometheus` custom resource. It must have `serviceMonitorNamespaceSelector` or `serviceMonitorSelector` parameters that prevent it from finding the `ServiceMonitor` installed by x509-certificate-exporter (see the sketch at the end of this comment).
I hope this helps a bit. Otherwise please provide a YAML export of your Prometheus object.
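For anyone following along, here is a minimal sketch of that last point. The resource name, namespace, and label value below are assumptions, not taken from a real cluster; a Prometheus custom resource configured like this will only discover ServiceMonitor objects that carry the matching label:

```yaml
# Hypothetical Prometheus custom resource excerpt (name, namespace, and label
# value are assumptions). With this selector, the operator only discovers
# ServiceMonitor objects whose metadata labels include "release: my-prometheus".
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: my-prometheus
  namespace: monitoring
spec:
  serviceMonitorNamespaceSelector: {}   # search every namespace
  serviceMonitorSelector:
    matchLabels:
      release: my-prometheus            # ServiceMonitors must carry this label
```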
Hi @npdgm
Thanks for the troubleshooting steps, they gave me some insight. I redeployed the chart with default values. I am using the Prometheus operator; it was deployed with the kube-prometheus Helm chart. In the Prometheus CRD I see labels added only for serviceMonitorSelector, as below:
```yaml
serviceMonitorNamespaceSelector: {}
serviceMonitorSelector:
  matchLabels:
    release: kube-prometheus-stack
```
I added the label release: kube-prometheus-stack to the x509 ServiceMonitor selector field to match the Prometheus operator, but it did not help, and there was nothing in the Prometheus operator pod logs either. I then redeployed x509-certificate-exporter with the default values.yaml and manually added the label below to the x509 ServiceMonitor before deploying the chart. Tailing the Prometheus operator logs still shows nothing mentioning x509.
```yaml
  selector:
    matchLabels:
      {{- include "x509-certificate-exporter.selectorLabels" . | nindent 6 }}
      release: kibe-prometheus-stack
```
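For context, a minimal sketch of where such a label is actually matched; the names and port below are assumptions. The operator compares serviceMonitorSelector against the metadata labels of the ServiceMonitor object itself, so a ServiceMonitor it would pick up looks roughly like this:

```yaml
# Hypothetical ServiceMonitor as the operator would need to see it; name,
# namespace, and port name are assumptions. Note that "release" sits under
# metadata.labels, which is what serviceMonitorSelector is matched against.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: x509-certificate-exporter
  namespace: monitoring
  labels:
    release: kube-prometheus-stack      # must match serviceMonitorSelector.matchLabels
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: x509-certificate-exporter   # selects the exporter Service
  endpoints:
    - port: metrics                     # assumed port name on the exporter Service
```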
Much appreciated.
Hello mates. I am using this chart with a PodMonitor because of the same issue. It's related to the fact that the Service in the chart is created as headless, so the ServiceMonitor is not able to scrape the service even when the selectors are well crafted. I tested this change and it worked perfectly, but I had no time to open an issue with the fix, so at the moment I'm using a PodMonitor (I honestly don't like it a lot, so I will try to open a PR ASAP).
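For anyone who wants to try the same workaround, here is a rough PodMonitor sketch; the namespace, labels, and port name are assumptions and need to be checked against the exporter pods in your cluster:

```yaml
# Hypothetical PodMonitor workaround: scrapes the exporter pods directly,
# bypassing the Service. Namespace, labels, and port name are assumptions.
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: x509-certificate-exporter
  namespace: monitoring
  labels:
    release: kube-prometheus-stack        # so the operator's podMonitorSelector picks it up
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: x509-certificate-exporter   # assumed pod labels
  podMetricsEndpoints:
    - port: metrics                       # assumed container port name
```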
@achetronic thank you for taking the time to report issues. We really appreciate it.
I agree with your position on PodMonitors. By default a ServiceMonitor will offer greater compatibility with older prometheus-operator versions and is quite the standard.
The associated Service was made headless on purpose, as there is no point in having an internal load-balancer in front of multiple exporters. Prometheus-operator queries Endpoints and should not need kube-proxy or a CNI to provision unneeded network configuration. I wonder why you, and possibly other users, are having an issue with the headless Service. We've been deploying this chart on many Kubernetes distributions, CNIs, and cloud providers, and have never encountered this situation. If you can spare a little time, could you tell us about your environment and any detail you think may differ from most common clusters? For example:
- k8s distribution, version, and cloud provider used if it's managed
- how prometheus-operator was deployed, its version, and whether the kube-prometheus-stack chart was used
- usage of NetworkPolicies or not
- CNI used, and whether kube-proxy is employed
Anyhow, I investigated a few charts from prometheus-community and they don't seem to use headless Services. Even though I'm pretty sure other exporters do use headless Services, let's follow the same practice as prometheus-community. I will open a PR to make the Service a regular ClusterIP by default and move the headless option to a value flag. You can expect a release fairly soon, as we have a few CI and build changes in the pipe.
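For readers unfamiliar with the distinction, here is a minimal sketch of the two Service shapes being discussed. The names, selector labels, and port are assumptions, not the chart's actual manifests:

```yaml
# Headless Service (the current behaviour discussed above): clusterIP is None,
# so no virtual IP is allocated and scrapers resolve the pod endpoints directly.
apiVersion: v1
kind: Service
metadata:
  name: x509-certificate-exporter
spec:
  clusterIP: None
  selector:
    app.kubernetes.io/name: x509-certificate-exporter
  ports:
    - name: metrics
      port: 9793          # assumed exporter port
---
# Regular ClusterIP Service (the planned default): a virtual IP is allocated
# and traffic to it is balanced across the exporter pods by kube-proxy/CNI.
apiVersion: v1
kind: Service
metadata:
  name: x509-certificate-exporter
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: x509-certificate-exporter
  ports:
    - name: metrics
      port: 9793          # assumed exporter port
```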
:tada: This issue has been resolved in version 3.8.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket: