external-dns
Export metrics to Prometheus
Hi! We have deployed kube-prometheus-stack and the latest version of the external-dns Helm chart on a GKE cluster, and enabled the ServiceMonitor, but we can't find the metrics in Prometheus.
Our intent is to build this Grafana dashboard (https://grafana.com/grafana/dashboards/15038-external-dns/), but we can't find those metrics.
This is the config in this chart's values.yaml:

```yaml
serviceMonitor:
  # -- If true, create a ServiceMonitor resource to support the Prometheus Operator.
  enabled: true
  # -- Additional labels for the ServiceMonitor.
  additionalLabels: {}
  # -- Annotations to add to the ServiceMonitor.
  annotations: {}
  # -- (string) If set, create the ServiceMonitor in an alternate namespace.
  namespace:
  # -- (string) If set, override the Prometheus default interval.
  interval:
  # -- (string) If set, override the Prometheus default scrape timeout.
  scrapeTimeout:
  # -- (string) If set, override the Prometheus default scheme.
  scheme:
  # -- Configure the ServiceMonitor TLS config.
  tlsConfig: {}
  # -- (string) Provide a bearer token file for the ServiceMonitor.
  bearerTokenFile:
  # -- Relabel configs to apply to samples before ingestion.
  relabelings: []
  # -- Metric relabel configs to apply to samples before ingestion.
  metricRelabelings: []
  # -- Provide target labels for the ServiceMonitor.
  targetLabels: []
```
Do we need to do anything else to see the metrics in Prometheus?
Thanks a lot!
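A common cause with kube-prometheus-stack is that its Prometheus instance only selects ServiceMonitors whose labels match its `serviceMonitorSelector`, which by default matches `release: <helm-release-name>`. A minimal sketch of the fix, assuming the stack was installed with the Helm release name `kube-prometheus-stack` (adjust the label value to your actual release name):

```yaml
serviceMonitor:
  enabled: true
  additionalLabels:
    # kube-prometheus-stack's default serviceMonitorSelector matches
    # `release: <helm-release-name>`; without a matching label here,
    # Prometheus never discovers this ServiceMonitor.
    release: kube-prometheus-stack  # assumed release name
```

You can verify what your Prometheus selects with `kubectl get prometheus -o yaml` and inspect `spec.serviceMonitorSelector`.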
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Please add `deploymentAnnotations` with `prometheus.io/scrape: 'true'`.
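A sketch of that suggestion in values.yaml. Note that `prometheus.io/scrape` annotations only matter if your Prometheus has an annotation-based scrape config; the Prometheus Operator (kube-prometheus-stack) discovers targets via ServiceMonitors instead, so this may not apply to the setup above:

```yaml
deploymentAnnotations:
  prometheus.io/scrape: "true"
  # external-dns serves metrics on port 7979 by default
  prometheus.io/port: "7979"
```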
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.