Stream metrics to Prometheus
Instead of collecting JSON or text output for later analysis, let's stream all the metrics to Promscale and analyze them there.
Dashboard example
Annotations example
In the dashboard settings, go to Annotations and add a new annotation based on the Promscale source:
You can repeat the same setup for all the events that are pushed to the gateway: tsbs_run_start, tsbs_run_finish, tsbs_load_start, tsbs_load_finish.
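As a sketch of how one of those events can be pushed, assuming the Pushgateway from the compose file below is reachable on localhost:9091 (the job label `tsbs` is an arbitrary choice here, not something TSBS mandates):

```shell
#!/bin/sh
# Push a tsbs_load_start event to the Pushgateway; the value is the unix
# timestamp, so the event shows up as a point in time on the dashboard.
GATEWAY="http://localhost:9091"   # assumed address from the compose file
payload="tsbs_load_start $(date +%s)"
echo "$payload" | curl --silent --max-time 5 --data-binary @- \
  "$GATEWAY/metrics/job/tsbs" || echo "pushgateway not reachable"
```

The same one-liner works for tsbs_run_start, tsbs_run_finish, and tsbs_load_finish; only the metric name changes.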
Configuration
I created the following directory structure on the Promscale machine:
mkdir promscale
cd promscale
Create the prometheus.yml file in the folder:
global:
  scrape_interval: 10s
  evaluation_interval: 10s
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
  - job_name: pushgateway
    honor_labels: true
    static_configs:
      - targets: ['pushgateway:9091']
  - job_name: node-exporter
    static_configs:
      - targets: ['tsbs:9100', 'tsbs:9101', 'database:9100']
remote_write:
  - url: "http://promscale:9201/write"
remote_read:
  - url: "http://promscale:9201/read"
    read_recent: true
Now create the docker-compose.yml in the same directory:
version: '3.0'
services:
  db:
    image: timescaledev/timescaledb-ha:pg12-latest
    ports:
      - 5432:5432/tcp
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_USER: postgres
  prometheus:
    image: prom/prometheus:latest
    ports:
      - 9090:9090/tcp
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    depends_on:
      - pushgateway
  promscale:
    image: timescale/promscale:latest
    ports:
      - 9201:9201/tcp
    restart: on-failure
    depends_on:
      - db
      - prometheus
    environment:
      PROMSCALE_DB_CONNECT_RETRIES: 10
      PROMSCALE_WEB_TELEMETRY_PATH: /metrics-text
      PROMSCALE_DB_URI: postgres://postgres:password@db:5432/postgres?sslmode=allow
  node_exporter:
    image: quay.io/prometheus/node-exporter
    ports:
      - "9100:9100"
  pushgateway:
    image: prom/pushgateway
    container_name: pushgateway
    restart: unless-stopped
    expose:
      - 9091
    ports:
      - "9091:9091"
Then run docker-compose up -d to start Promscale together with Prometheus and the Pushgateway.
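A quick way to verify that everything came up is to hit each service's health endpoint. The ports are the ones published in the compose file; Prometheus and the Pushgateway expose /-/healthy, and /healthz for Promscale is my assumption from its docs:

```shell
#!/bin/sh
# Probe each container on its published port and report the result.
check() {
  if curl --silent --fail --max-time 5 "$1" >/dev/null; then
    echo "OK   $1"
  else
    echo "DOWN $1"
  fi
}
check http://localhost:9090/-/healthy   # prometheus
check http://localhost:9091/-/healthy   # pushgateway
check http://localhost:9201/healthz     # promscale
```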
Configure hosts
To make it easier to understand which machine does what, use /etc/hosts to give friendly names to the machines that are streaming the data. In our case, you can see the targets are already using these names:
- targets: ['tsbs:9100', 'tsbs:9101', 'database:9100']
So edit /etc/hosts and add the following entries:
10.0.200.45 tsbs
10.0.200.46 database
Now, ssh into both the tsbs and database machines and install node_exporter.
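A minimal install sketch for both machines, assuming the linux-amd64 binary release; the version below is a placeholder, so check the node_exporter releases page for the current one:

```shell
#!/bin/sh
# Fetch node_exporter (the version here is an assumption, not the latest).
VERSION="1.3.1"
DIST="node_exporter-${VERSION}.linux-amd64"
URL="https://github.com/prometheus/node_exporter/releases/download/v${VERSION}/${DIST}.tar.gz"
echo "fetching $URL"
curl --silent --location --max-time 60 --remote-name "$URL" || echo "download failed"
[ -f "${DIST}.tar.gz" ] && tar -xzf "${DIST}.tar.gz"
# Then start it on the port the scrape config expects:
#   ./${DIST}/node_exporter --web.listen-address=":9100" &
```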
How to test it:
Start ./tsbs_load and check port 9101:
curl localhost:9101/metrics
At the bottom of the output you should see the new TSBS metrics:
# HELP tsbs_load_metric_count TSBS: The total number of metrics inserted
# TYPE tsbs_load_metric_count counter
tsbs_load_metric_count 4.13e+07
# HELP tsbs_load_rows_count TSBS: The total number of rows ingested
# TYPE tsbs_load_rows_count counter
tsbs_load_rows_count 4.13e+06
If it's running queries, the output is slightly different:
# HELP tsbs_metrics_run_queries Number of queries runs per type
# TYPE tsbs_metrics_run_queries counter
tsbs_metrics_run_queries{is_partial="false",is_warm="false",label="TimescaleDB CPU over threshold, 1 host(s)"} 2159.7264299999997
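To keep an eye on just the TSBS counters without scrolling through the whole scrape, you can filter the exposition output. Here the heredoc stands in for the output of curl localhost:9101/metrics, using the sample lines above:

```shell
#!/bin/sh
# Filter tsbs_* samples out of a metrics scrape.
cat > metrics.txt <<'EOF'
# HELP tsbs_load_metric_count TSBS: The total number of metrics inserted
# TYPE tsbs_load_metric_count counter
tsbs_load_metric_count 4.13e+07
# HELP tsbs_load_rows_count TSBS: The total number of rows ingested
# TYPE tsbs_load_rows_count counter
tsbs_load_rows_count 4.13e+06
EOF
# Sample lines start with the metric name; HELP/TYPE comments start with '#'.
grep '^tsbs_' metrics.txt
```

Against a live run this becomes curl -s localhost:9101/metrics | grep '^tsbs_'.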