Add custom DNS support.
Over the past year, I've seen many users who need probes to use a specific DNS server rather than always falling back to the local DNS settings:
- https://github.com/prometheus/blackbox_exporter/issues/1356
- https://github.com/prometheus/blackbox_exporter/issues/1275
- https://github.com/prometheus/blackbox_exporter/issues/1235
- https://github.com/prometheus/blackbox_exporter/issues/960
Blackbox Exporter supports Helm chart deployment and provides container images, which makes it easy to deploy flexibly in large-scale Kubernetes clusters. Real-world network probing requirements are often complex: we want some pods to keep resolving targets through CoreDNS, while other pods should resolve through public DNS servers or through DNS servers on jump hosts. Based on these enterprise realities, we made some modifications to the source code.
Assuming we want the following configuration (either manually written or Helm-generated):
- file: /etc/blackbox.yaml
- file: metrics/data/{cluster-name}/blackbox.jumpserver.yaml
- file: metrics/data/{cluster-name}/blackbox.worker.yaml
- file: metrics/data/{cluster-name}/blackbox.master.yaml
We have implemented the following configuration (sanitized and simplified for demonstration):
```yaml
modules:
  http_get_2xx:
    prober: http
    http:
      dns_server: 10.96.0.10:53
      dns_timeout: 10s
  tcp_connect:
    prober: tcp
    tcp:
      dns_server: 10.96.0.10:53
      dns_timeout: 10s
  grpc:
    prober: grpc
    grpc:
      dns_server: 10.96.0.10:53
      dns_timeout: 10s
```
This change has been running stably for over a year in Kubernetes clusters spanning hundreds to thousands of physical machines, so both the code and the operational approach have had substantial production-level validation.
I hope this will be helpful to others. @electron0zero , @anionDev , @RorFis , @darioef , @snaar