TCP prober always returns success no matter which port you use.

Open LuisMLGDev opened this issue 2 years ago • 7 comments

Hi guys, I've configured a TCP check like this:

  modules:
      tcp_connect:
          prober: tcp
          timeout: 5s

Basically, the issue is that I always get a success no matter which port is used or whether the service is really up or not:

    module=tcp_connect target=testing.miservice.link:28666 level=debug msg="Successfully dialed"
    module=tcp_connect target=testing.miservice.link:28555 level=debug msg="Successfully dialed"

Obviously, you need the proper port to get a successful connection. When I test with telnet, whether it connects or not depends on whether the port is the proper one, as expected.

The issue was tested on versions 0.18 and 0.19, with the Blackbox exporter deployed in an EKS cluster. The same Blackbox instance also contains a configuration for HTTP_2xx, and all of those endpoints (http prober) are working fine.

Scrape config:

    - job_name: 'blackbox_tcp'
      metrics_path: /probe
      params:
        module: [tcp_connect]
      static_configs:
        - targets:
          - testing.miservice.link:28555
          - testing.miservice.link:28666
      relabel_configs:
        - source_labels: [__address__]
          target_label: __param_target
        - source_labels: [__param_target]
          target_label: instance
        - target_label: __address__
          replacement: monitoring-prometheus-blackbox-exporter:9115
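
For what it's worth, the probe endpoint can also be hit by hand, and adding debug=true makes the exporter return the probe logs and the resolved module config, which shows exactly what the prober did. Assuming the service name from the scrape config above, the URL would look like:

    http://monitoring-prometheus-blackbox-exporter:9115/probe?module=tcp_connect&target=testing.miservice.link:28555&debug=true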

I have really spent a lot of time running tests, with no luck. Any suggestions or ideas?

Thanks in advance!

LuisMLGDev avatar Jul 12 '21 10:07 LuisMLGDev

I ran into a similar problem, since we use Istio. I am still trying to figure out if there is a way to bypass Istio based on hostnames, so that I don't have to hardcode IPs in the destination rules of Istio.

For my case, I switched to the ssh_banner tcp probe, which can be found in the example config.
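
For anyone searching, the module in the example config looks roughly like this (the exact regex may differ between versions):

    ssh_banner:
      prober: tcp
      timeout: 5s
      tcp:
        query_response:
        # the probe fails unless the remote end actually sends an SSH banner
        - expect: "^SSH-2.0-"

The important part is the query_response expectation: the probe only succeeds if whatever answered the dial responds like an SSH server, so a proxy that merely accepts the connection will fail the check.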

xbglowx avatar Aug 15 '21 15:08 xbglowx

Hi xbglowx! Thank you for the feedback. I'm using Istio too, so it makes sense that we share the issue. I'm gonna look into that and try to find a workaround. ssh_banner is not an option for me, since I don't have SSH access to those servers :( I will keep this post updated.

Thanks again!

LuisMLGDev avatar Aug 16 '21 09:08 LuisMLGDev

I have the same problem: whether the port is listening or not, it always returns success.

b-onigam avatar Aug 24 '21 07:08 b-onigam

This would mean that Istio is taking over all TCP connections, and what you see is a success while connecting to the MITM Istio proxy. Is that correct?
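
With a bare tcp_connect module, the prober only checks that the TCP handshake completes, so anything that accepts the connection, including a transparent proxy, counts as success. A sketch of a module that would catch this, assuming the real service sends some greeting on connect (the module name and the MYSVC banner here are made up for illustration):

    tcp_connect_banner:
      prober: tcp
      timeout: 5s
      tcp:
        query_response:
        # expect the service's own greeting, which the Istio proxy will not send
        - expect: "^MYSVC"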

roidelapluie avatar Aug 24 '21 08:08 roidelapluie

Yes, once you add the Envoy sidecar, Istio takes over all TCP connectivity. I'm pretty sure there is a way to configure the istio-proxy to pass some particular connections through, but I didn't have time to look into that. For me, the workaround was to add a pod annotation so that the Envoy sidecar is not installed. It's not ideal, I know, but it's working, and for now that's enough.
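
In case it helps someone, that annotation is the standard Istio injection toggle, set on the blackbox-exporter pod template, something like:

    template:
      metadata:
        annotations:
          # skip Envoy sidecar injection for this pod
          sidecar.istio.io/inject: "false"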

LuisMLGDev avatar Aug 24 '21 11:08 LuisMLGDev

This issue can probably be closed, since it is not a bug in blackbox-exporter. Although, maybe there should be a note about using the blackbox tcp prober behind a proxy?

xbglowx avatar Aug 25 '21 14:08 xbglowx

Same problem, but I didn't use Istio. In my env I use ipmasq as a DaemonSet. Any ideas?

missthesky avatar Nov 22 '21 03:11 missthesky