
NATS metrics are missing (docker compose)

Open cecilwei opened this issue 4 years ago • 13 comments

Hi,

I have both NATS server 2.0.0 and exporter 0.4.0 running on the same server. However, the metrics reported by the exporter don't seem to include any variables with the 'gnatsd_varz' prefix, so the dashboard shows nothing.

The following is my docker compose configuration. Is there anything I missed? Any suggestions are appreciated. Attachments: varz.json.txt, metrics.txt

services:
  nats-server:
    command:
      - "-p"
      - "4222"
      - "-m"
      - "8222"
      - "-cluster"
      - "nats://0.0.0.0:6222"
    ports:
      - 4222:4222
      - 6222:6222
      - 8222:8222
    image: nats:latest
    container_name: nats-server
  prometheus-nats-exporter:
    image: natsio/prometheus-nats-exporter
    hostname: prometheus-nats-exporter
    command: "-varz http://0.0.0.0:8222"
    ports:
      - "7777:7777"
  prometheus:
    image: prom/prometheus:latest
    hostname: prometheus
    volumes:
      - "./prometheus.yml:/etc/prometheus/prometheus.yml"
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    hostname: grafana
    ports:
      - "3000:3000"

cecilwei avatar Jul 09 '19 02:07 cecilwei

could you try with this instead: command: "-varz http://127.0.0.1:8222" for the exporter?

wallyqs avatar Jul 10 '19 20:07 wallyqs

> could you try with this instead: command: "-varz http://127.0.0.1:8222" for the exporter?

Thanks for your reply. I tried that, but it didn't help. Could the problem be that it is running in a Docker environment?

cecilwei avatar Jul 11 '19 01:07 cecilwei

Maybe you need to add some exposed ports to the config so that it is reachable from the other containers.

wallyqs avatar Jul 14 '19 18:07 wallyqs

Same issue here @cecilwei, did you manage to solve this problem? Thanks.

artas728 avatar Oct 03 '19 15:10 artas728

You could try exposing the port, e.g.

expose:
      - "7777"

https://docs.docker.com/compose/compose-file/#expose

ColinSullivan1 avatar Oct 03 '19 15:10 ColinSullivan1

@ColinSullivan1 thank you for the answer. I did that, but prometheus-nats-exporter still doesn't see metrics from NATS.

artas728 avatar Oct 03 '19 15:10 artas728

> Hi,
>
> I have both NATS server 2.0.0 and exporter 0.4.0 running on the same server. However, the metrics reported by the exporter don't seem to include any variables with the 'gnatsd_varz' prefix, so the dashboard shows nothing.
>
> The following is my docker compose configuration. Is there anything I missed? Any suggestions are appreciated.

Hello, I think you should use something like this:

nats-exporter:
  image: synadia/prometheus-nats-exporter:0.6.2
  restart: unless-stopped
  command: "-connz -varz -channelz -serverz -subz http://127.0.0.1:8222"
  ports:
    - 127.0.0.1:7777:7777

usero4eg avatar Jun 08 '20 07:06 usero4eg

Same problem here.

daniele-sartiano avatar Jun 21 '20 14:06 daniele-sartiano

I faced the same problem: if NATS takes a little longer to start than the exporter's first poll, the NATS metrics never show up. I then found this error in the logs: [ERR] Error loading metric config from response: Get "http://localhost:8222/routez": dial tcp [::1]:8222: connect: cannot assign requested address. I think the problem is in this line of code: it never retries on any error other than "connection refused". https://github.com/nats-io/prometheus-nats-exporter/blob/master/collector/collector.go#L244

A workaround of delaying the nats-exporter pod until NATS is reachable fixed it for me. I think adding another check for the "cannot assign requested address" error should fix it properly.
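
In docker compose, the same delay can be approximated with a healthcheck on the NATS monitoring port plus a depends_on condition on the exporter. A rough sketch, assuming the nats:alpine variant (so busybox wget is available) and a Compose version that supports depends_on conditions; service names and timings are illustrative:

services:
  nats-server:
    image: nats:alpine
    command: ["-m", "8222"]
    healthcheck:
      # probe the monitoring endpoint until the server answers
      test: ["CMD", "wget", "-q", "-O", "-", "http://localhost:8222/varz"]
      interval: 5s
      timeout: 3s
      retries: 10
  prometheus-nats-exporter:
    image: natsio/prometheus-nats-exporter
    command: "-varz http://nats-server:8222"
    ports:
      - "7777:7777"
    depends_on:
      nats-server:
        condition: service_healthy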

codifierr avatar Jun 29 '20 04:06 codifierr

Same problem here

nats-streaming:
  image: nats-streaming:latest
  container_name: nats-streaming
  hostname: nats
  restart: unless-stopped
  ports:
    - 4222:4222
    - 8222:8222
  command: ["-m", "8222", "-store", "file", "-dir", "/datastore"]
  volumes:
    - ./Nats-Streaming:/datastore
nats-exporter:
  image: synadia/prometheus-nats-exporter:latest
  command: ["-varz", "http://nats:8222"]
  hostname: nats-exporter
  ports:
    - 7777:7777
  depends_on:
    - nats-streaming
prometheus:
  image: prom/prometheus:latest
  ports:
    - "9090:9090"
  volumes:
    - "./configs/prometheus.yml:/etc/prometheus/prometheus.yml"
  depends_on:
    - nats-exporter
grafana:
  image: grafana/grafana:latest
  environment:
    GF_SECURITY_ADMIN_PASSWORD: "mypassword"
  ports:
    - "3000:3000"
  depends_on:
    - prometheus

Getting the same metrics out as the OP

jphgardner avatar Aug 26 '20 22:08 jphgardner

If anyone stumbles over this problem again:

prometheus-nats-exporter:
  image: natsio/prometheus-nats-exporter
  hostname: prometheus-nats-exporter
  command: "-DV -jsz=all http://nats:8222"
  ports:
    - "7777:7777"

This should work :)

Bomberman244 avatar Oct 11 '22 10:10 Bomberman244

I know this post is from a long time ago, but maybe it will help if someone is still looking for an answer.

services:
  n1.example.net:
    container_name: n1
    image: nats:latest
    entrypoint: /nats-server
    command: --name N1 --js --debug --trace --sd /data -p 4222 -m 8222
    networks:
    - test_network
    ports:
    - 4222:4222
    - 6222:6222
    - 8222:8222
    volumes:
    - ./jetstream-cluster/n1:/data

  prometheus-nats-exporter:
    image: synadia/prometheus-nats-exporter
    hostname: prometheus-nats-exporter
    command: "-connz -varz -channelz -serverz -subz http://host.docker.internal:8222"
    ports:
      - "7777:7777"
    networks:
    - test_network
  

This compose config fixes the problem of missing metrics. The problem is that inside the Docker environment the exporter does not have access to the ports exposed on the host machine. You need to use either http://host.docker.internal or an address with the container name, e.g. http://n1:8222 in my example.
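
For reference, the container-name variant of the exporter service above would only swap the target URL (just -varz shown here for brevity; it assumes both services stay on test_network):

  prometheus-nats-exporter:
    image: synadia/prometheus-nats-exporter
    hostname: prometheus-nats-exporter
    # n1 resolves via Docker's DNS on the user-defined test_network
    command: "-varz http://n1:8222"
    ports:
      - "7777:7777"
    networks:
      - test_network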

nozbieg avatar Aug 01 '23 19:08 nozbieg

@nozbieg thanks for the update. BTW, we moved away from synadia/prometheus-nats-exporter some time ago, so the newer images can be found under the natsio/prometheus-nats-exporter org: https://hub.docker.com/layers/natsio/prometheus-nats-exporter/0.12.0/images/sha256-83e157c6f2b2c8c29abb4171d6b99bb9b2a733fc158afffbb388e671de95da5c?context=explore

wallyqs avatar Aug 01 '23 20:08 wallyqs

Yeah, I'm working with it right now and changed it too just moments ago. That synadia image had some problems with -jsz.

nozbieg avatar Aug 01 '23 20:08 nozbieg