
Changing one of a gauge metric's labels creates two overlapping time series

Open bvasiliev opened this issue 1 year ago • 9 comments

What did you do?

I collect NVMe device metrics using the standard node-exporter. The nvme_device_info metric, of type gauge, has several labels, including device and serial.

The device label may change over time for the same serial and the same physical device, e.g. during a host reboot.

What did you expect to see?

I expect all the labels for the same metric to remain consistent over time and with the source, even if one of them is updated.

What did you see instead? Under which circumstances?

After the source host reboots and the device label changes, Prometheus returns two overlapping time series, with the new and the old device label values, for three scrape intervals:

nvme_device_info after device label change

One of the overlapping time series has wrong label pairs that do not correspond to the data source. The metrics in the source text file during the label change are consistent and contain no duplicate values:

$ grep -e nvme_device_info -e nvme_data_units_written_total /var/lib/prometheus/node-exporter/nvme.prom

# HELP nvme_data_units_written_total Number of 512-byte data units written by host, reported in thousands
# TYPE nvme_data_units_written_total counter
nvme_data_units_written_total{device="nvme1n1"} 6.1718292e+07
nvme_data_units_written_total{device="nvme0n1"} 4.0432075e+07
# HELP nvme_device_info Device information
# TYPE nvme_device_info gauge
nvme_device_info{device="nvme1n1",firmware="3B7QCXE7",model="Samsung SSD 960 EVO 250GB",serial="S3ESNX0J913905P"} 1.0
nvme_device_info{device="nvme0n1",firmware="3B7QCXE7",model="Samsung SSD 960 EVO 250GB",serial="S3ESNX0K464894P"} 1.0

This behavior breaks many-to-one grouping with other metrics as well as overall monotonicity. For example, the query nvme_data_units_written_total{} * on(device) group_left(serial,model) nvme_device_info{serial="S3ESNX0K464894P"} returns broken time series that are impossible to aggregate or rate():

nvme_data_units_written_total group_left after device label change
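One way to confirm the overlap on the Prometheus side is to count the info series per serial: anything above one at a given instant means the old and the new device labels coexist. A sketch, assuming the local Prometheus listens on the default port 9090:

# Sketch only: list serials that currently have more than one nvme_device_info series.
curl -sG http://localhost:9090/api/v1/query \
  --data-urlencode 'query=count by (serial) (nvme_device_info) > 1'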

System information

Linux 6.8.0-35-generic x86_64

Prometheus version

prometheus, version 2.45.3+ds (branch: debian/sid, revision: 2.45.3+ds-2build1)
  build user:       [email protected]
  build date:       20240401-07:14:21
  go version:       go1.22.1
  platform:         linux/amd64
  tags:             unknown

Prometheus configuration file

global:
  scrape_interval:     60s
  evaluation_interval: 60s

Alertmanager version

No response

Alertmanager configuration file

No response

Logs

No response

bvasiliev avatar Jun 20 '24 09:06 bvasiliev

First of all: If a label value changes, it creates a new series, also called a metric. So to get the terminology right, you are creating a new metric (and removing an old one).

Staleness handling in Prometheus should handle exactly this case, i.e. you should not see the overlap. However, since you do see the overlap, my assumption is that your node_exporter instance actually exposes both metrics at the same time for a few minutes. You could check that by manually looking at the /metrics endpoint of your node_exporter around the time of a change. If you see the duplication there, you need to troubleshoot your node_exporter next.
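For example, a polling loop along these lines (a sketch, assuming the default node_exporter port 9100) would show whether both label values are ever exposed at once:

# Record the nvme_device_info lines every 15 seconds around the time of a change,
# so any window in which both the old and the new device label are exposed shows up.
while true; do
  date -Is
  curl -s http://localhost:9100/metrics | grep '^nvme_device_info'
  sleep 15
done >> nvme_info_watch.log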

beorn7 avatar Jun 26 '24 17:06 beorn7

@beorn7 thank you for your attention!

I've checked node_exporter's /metrics by running the script curl http://localhost:9100/metrics -o - 2>&1 | grep -e ^nvme_device_info -e ^node_textfile_scrape_error | ts "[%Y-%m-%d %H:%M:%S]" >> node-exporter-test.txt every minute, including during the time of the change, and didn't find any duplication at all.

prometheus-overlap2 metrics-log

But in this attempt I was querying in real time and noticed that, from the moment of the change (the reboot) at 18:29:00 until 18:32:00, the graph above showed only one series, labeled {device="nvme0n1"}. The new series {device="nvme1n1"} appeared in the query results at 18:33:00 with three retrospective values.

bvasiliev avatar Jun 26 '24 19:06 bvasiliev

Are you using normal scrapes to collect the data, or are you doing something special? It looks like staleness isn't working in your case, and you seem to have delayed ingestion. Are you maybe using the OTel collector in between?

beorn7 avatar Jun 27 '24 08:06 beorn7

In this case, only standard scrapes are used. Prometheus and node_exporter are both installed on the same host from the Ubuntu Noble repo and interact directly.

I'm using remote_write in the config, but the issue occurs in the local Prometheus instance (yet another screenshot below). I've just noticed that in the local database the overlapping series appears immediately, without waiting for three intervals.

prometheus local

Full configuration for completeness:

global:
  scrape_interval: 1m
  scrape_timeout: 10s
  evaluation_interval: 1m
scrape_configs:
- job_name: prometheus
  honor_timestamps: true
  scrape_interval: 1m
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  follow_redirects: true
  enable_http2: true
  static_configs:
  - targets:
    - localhost:9090
- job_name: node
  honor_timestamps: true
  scrape_interval: 1m
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  follow_redirects: true
  enable_http2: true
  static_configs:
  - targets:
    - chronos:9100
    - <secret>:9100
    - <secret>:9100
    - <secret>:9100
- job_name: unbound
  honor_timestamps: true
  scrape_interval: 1m
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  follow_redirects: true
  enable_http2: true
  static_configs:
  - targets:
    - localhost:9167
remote_write:
- url: <secret>
  remote_timeout: 30s
  basic_auth:
    username: <secret>
    password: <secret>
  follow_redirects: true
  enable_http2: true
  queue_config:
    capacity: 10000
    max_shards: 50
    min_shards: 1
    max_samples_per_send: 2000
    batch_send_deadline: 5s
    min_backoff: 30ms
    max_backoff: 5s
  metadata_config:
    send: true
    send_interval: 1m
    max_samples_per_send: 2000

bvasiliev avatar Jun 27 '24 19:06 bvasiliev

Are you using timestamps in your nvme.prom files?

beorn7 avatar Jun 28 '24 10:06 beorn7

No, I don't use timestamps.

nvme.prom is generated by the prometheus-community/node-exporter-textfile-collector-scripts/nvme_metrics.py without timestamps.

Sample: nvme.prom.gz

bvasiliev avatar Jun 28 '24 11:06 bvasiliev

I have run out of explanations. I cannot imagine that the staleness handling doesn't work as expected in such a vanilla situation (or in other words: I don't think this is a bug in Prometheus).

I strongly suspect something subtle is off in your setup. Feel free to further troubleshoot and provide more evidence here. Maybe somebody else is seeing the same.

beorn7 avatar Jun 28 '24 11:06 beorn7

Maybe we are back at the assumption that your node exporter is indeed exposing all those series at the same time for a short period, and your Prometheus happens to scrape in exactly that moment (while your curl wasn't so "lucky"). Just another wild guess.

beorn7 avatar Jun 28 '24 11:06 beorn7

exposing all those series at the same time for a short period, and your Prometheus happens to scrape in exactly that moment (while your curl wasn't so "lucky")

nvme.prom is written atomically via sponge (/usr/share/prometheus-node-exporter-collectors/nvme_metrics.py | sponge /var/lib/prometheus/node-exporter/nvme.prom), so I think we can rely on its integrity over time.

The overlap lasts for three intervals/minutes every time, but curl shows the change immediately. The values did not flap over time, neither in the curl-ed /metrics output nor in the TSDB. The label changes only once, and the interval between changes is constant. I think we can also exclude stochastic processes.

bvasiliev avatar Jun 28 '24 11:06 bvasiliev

Perhaps the node_exporter is being scraped twice during that time period: the target from before the reboot is still being scraped at the same time as the new one. Do the node-exporters run in containers?

You might want to examine the prometheus_sd_discovered_targets, prometheus_target_scrape_pool_targets, and node_textfile_mtime_seconds metrics.

I notice that curl is scraping localhost:9100, but Prometheus is pointed at chronos:9100. Perhaps curl doesn't accurately reflect the scraping. Maybe the target scraped by Prometheus exposes both the old and the new series during that time period.

You can take a look at the scrape_series_added and scrape_samples_scraped metrics.
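For example, assuming the local Prometheus on port 9090 and the job name node from the configuration above, these metrics could be inspected via the HTTP API (a sketch; label names may differ between Prometheus versions):

# Inspect service discovery, scrape pool, and textfile collector state.
for q in \
  'prometheus_sd_discovered_targets{config="node"}' \
  'prometheus_target_scrape_pool_targets{scrape_job="node"}' \
  'node_textfile_mtime_seconds' \
  'scrape_series_added{job="node"}' \
  'scrape_samples_scraped{job="node"}'
do
  curl -sG http://localhost:9090/api/v1/query --data-urlencode "query=$q"
  echo
done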

machine424 avatar Jun 30 '24 20:06 machine424

To localize the issue, I've created synthetic metrics that swap labels every 1m, 5m, and 15m (bvasiliev/prometheus-overlap-test).
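A minimal generator along these lines (a sketch only, not the actual contents of bvasiliev/prometheus-overlap-test) reproduces the label swap in the textfile-collector format:

#!/bin/sh
# Alternate the device label every other minute while keeping the serial stable,
# and write the file atomically via sponge, as in the real setup.
if [ $(( $(date +%s) / 60 % 2 )) -eq 0 ]; then dev=nvme0n1; else dev=nvme1n1; fi
{
  echo '# HELP overlap_test_info Synthetic info metric with a flapping device label'
  echo '# TYPE overlap_test_info gauge'
  echo "overlap_test_info{device=\"$dev\",serial=\"TESTSERIAL\"} 1"
} | sponge /var/lib/prometheus/node-exporter/overlap_test.prom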

I found that, with uninterrupted operation of Prometheus, label changes are handled as expected.

But the issue occurs when a label is changed while Prometheus is down and the downtime is shorter than a scrape_interval. So yes, under these conditions the node_exporter is effectively being scraped twice.

Below are the charts showing the Prometheus restart at 09:45 and the shutdown for one minute at 09:55:

1m 5m

bvasiliev avatar Jul 01 '24 12:07 bvasiliev

I hadn’t realized that Prometheus itself was restarted. In such a case, because Prometheus cannot write the staleness markers, I believe it will fall back to the old behavior of "keeping the series for X intervals before dropping it." This behavior seems to be what you’re observing.

By the way, good debugging effort!

machine424 avatar Jul 01 '24 14:07 machine424

I hadn’t realized that Prometheus itself was restarted

Oh, that little detail I didn't highlight :-)

I temporarily masked this with a start delay in the prometheus-node-exporter.service unit:

[Service]
ExecStartPre=/usr/bin/sleep 60
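On a packaged install, one way to apply such an override is the standard systemd drop-in workflow, for example:

# Open an override file for the unit, paste the [Service] snippet above, then restart.
sudo systemctl edit prometheus-node-exporter.service
sudo systemctl restart prometheus-node-exporter.service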

I'm not sure if this staleness behavior is a bug, but I think it may not be uncommon in scenarios where Prometheus is in agent mode.

bvasiliev avatar Jul 01 '24 16:07 bvasiliev

For staleness handling to work across restarts, we would need to reconstruct a lot of state from the scraping… not sure if it is worth the effort.

I'll close this issue as we know now what happens.

A "restart-resilient staleness handling" feature would be quite involved and require a design doc. If somebody wants to work on it, please let us know.

beorn7 avatar Jul 02 '24 17:07 beorn7