
How do I get my blackbox exporter to allow scraping of all metrics?

b3y0ndd34th opened this issue 2 years ago · 1 comment

Host operating system: output of uname -a

Windows 10

blackbox_exporter version: output of blackbox_exporter --version

0.19.0

What is the blackbox.yml module config?

modules:
  http_2xx:
    prober: http
    timeout: 5s
    http:
      valid_http_versions: ["HTTP/1.1", "HTTP/2.0"]
      valid_status_codes: []  # Defaults to 2xx
      method: GET
      no_follow_redirects: false
      fail_if_ssl: false
      fail_if_not_ssl: false
      fail_if_body_matches_regexp:
        - "Could not connect to database"
      fail_if_body_not_matches_regexp:
        - "Download the latest version here"
      fail_if_header_matches: # Verifies that no cookies are set
        - header: Set-Cookie
          allow_missing: true
          regexp: '.*'
      fail_if_header_not_matches:
        - header: Access-Control-Allow-Origin
          regexp: '(\*|example\.com)'
      tls_config:
        insecure_skip_verify: false
      preferred_ip_protocol: "ip4" # defaults to "ip6"
      ip_protocol_fallback: false  # no fallback to "ip6"
  http_post_2xx:
    prober: http
    timeout: 5s
    http:
      method: POST
      headers:
        Content-Type: application/json
      body: '{}'
  http_gzip:
    prober: http
    http:
      method: GET
      compression: gzip
  http_gzip_with_accept_encoding:
    prober: http
    http:
      method: GET
      compression: gzip
      headers:
        Accept-Encoding: gzip
  tls_connect:
    prober: tcp
    timeout: 5s
    tcp:
      tls: true
  tcp_connect_example:
    prober: tcp
    timeout: 5s
  irc_banner_example:
    prober: tcp
    timeout: 5s
    tcp:
      query_response:
        - send: "NICK prober"
        - send: "USER prober prober prober :prober"
        - expect: "PING :([^ ]+)"
          send: "PONG ${1}"
        - expect: "^:[^ ]+ 001"
  icmp_example:
    prober: icmp
    timeout: 5s
    icmp:
      preferred_ip_protocol: "ip4"
      source_ip_address: "127.0.0.1"
  dns_udp_example:
    prober: dns
    timeout: 5s
    dns:
      query_name: "www.prometheus.io"
      query_type: "A"
      valid_rcodes:
      - NOERROR
      validate_answer_rrs:
        fail_if_matches_regexp:
        - ".*127.0.0.1"
        fail_if_all_match_regexp:
        - ".*127.0.0.1"
        fail_if_not_matches_regexp:
        - "www.prometheus.io.\t300\tIN\tA\t127.0.0.1"
        fail_if_none_matches_regexp:
        - "127.0.0.1"
      validate_authority_rrs:
        fail_if_matches_regexp:
        - ".*127.0.0.1"
      validate_additional_rrs:
        fail_if_matches_regexp:
        - ".*127.0.0.1"
  dns_soa:
    prober: dns
    dns:
      query_name: "prometheus.io"
      query_type: "SOA"
  dns_tcp_example:
    prober: dns
    dns:
      transport_protocol: "tcp"
      preferred_ip_protocol: "ip4" 

What is the prometheus.yml scrape config?

global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
           - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
#scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  #- job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

  #  static_configs:
  #    - targets: ["localhost:9090"]

scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Careful, the scrape timeout has to be lower than the scrape interval.
    static_configs:
      - targets: ['localhost:9090', 'localhost:9182', 'localhost:9115', 'localhost:9117', 'localhost:7116']

What logging output did you get from adding &debug=true to the probe URL?

Nothing, the page just reloads
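
(For reference, the debug output is served by the blackbox exporter itself: opening its /probe endpoint in a browser, for example

http://localhost:9115/probe?module=http_2xx&target=https://prometheus.io&debug=true

returns the probe logs plus the metrics for that single probe. The host, module, and target above are placeholders.)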

What did you do that produced an error?

I am trying to query probe_success, but I do not see it listed as an available metric. I fear there may be other metrics I am missing too. Have I not enabled something?

What did you expect to see?

When I plug probe_success into Grafana or Prometheus, I do not get any data back. Neither recognizes probe_success as a metric, as I expected it would.

What did you see instead?

No displayed table, data, or graph.

b3y0ndd34th · Dec 31 '21 19:12

Are the addresses listed as your targets meant to return Prometheus metrics directly, or are they all Blackbox exporter servers?

To get probe_success, you need to pass in module and target query parameters.
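
For reference, probe metrics come from the exporter's /probe endpoint, so the Prometheus job has to rewrite each target into a query parameter via relabeling. A minimal sketch of such a scrape config, assuming the exporter is reachable at localhost:9115 and using the http_2xx module from the blackbox.yml above (the probed URL is a placeholder):

  - job_name: 'blackbox'
    metrics_path: /probe
    params:
      module: [http_2xx]            # must match a module defined in blackbox.yml
    static_configs:
      - targets:
          - https://example.com     # the endpoint to probe, not the exporter itself
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target   # becomes the ?target= query parameter
      - source_labels: [__param_target]
        target_label: instance         # keep the probed URL as the instance label
      - target_label: __address__
        replacement: localhost:9115    # actually scrape the blackbox exporter

With a job like this, each probed target exposes probe_success and the other probe_* metrics; scraping localhost:9115 directly only returns the exporter's own metrics from /metrics.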

OneCricketeer · Apr 21 '22 14:04