
If a source stops sending metrics, it is not removed from the list

Open q000p opened this issue 2 years ago • 15 comments

If you turn off the source, the exporter remembers the last value and keeps exporting it (see log). On the screenshot it is 223_40 and 223_41. Could you remove a source after it stops sending values?

q000p avatar Jan 25 '23 13:01 q000p

This is already supported:

To avoid using unbounded memory, metrics will be garbage collected five minutes after they are last pushed to. This is configurable with the --graphite.sample-expiry flag.

If this is configured but does not work, please re-open this issue with the following information:

  • the value of this flag
  • the value of the graphite_sample_expiry_seconds metric
  • debug logs of the incoming metrics (redacted as much as you need to)
  • the raw metric values from Prometheus (query for the_metric_name[1h] in the console tab)
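
For reference, a quick way to check the first two items against a running exporter (a sketch; localhost and port 9108 are assumed from the compose file further down):

# the exporter reports its configured expiry, in seconds, as a metric
curl -s http://localhost:9108/metrics | grep graphite_sample_expiry_seconds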

matthiasr avatar Jan 26 '23 09:01 matthiasr

The manual says:

To avoid using unbounded memory, metrics will be garbage collected five minutes after they are last pushed to. This is configurable with the --graphite.sample-expiry flag.

But in the latest Docker image this setting is (as I think) set to 3 hours.

q000p avatar Jan 26 '23 12:01 q000p

That's odd – we don't set it there at all. How are you running the image? Can you share the whole command you use?

matthiasr avatar Jan 26 '23 13:01 matthiasr

How did you determine that it is 3 hours?

matthiasr avatar Jan 26 '23 13:01 matthiasr

This part of the docker stack .yml file:

graphite-exporter:
  image: prom/graphite-exporter:v0.13.1
  ports:
    - "9108:9108"
    - "9109:9109"
  volumes:
    - graphite-configs:/tmp:ro
  command:
    - '--graphite.mapping-config=/tmp/graphite_mapping.conf'
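
A standalone equivalent can be handy for testing the expiry in isolation (a sketch; the image, ports, and mapping flag are taken from the snippet above, the expiry value is illustrative, and the host path to the mapping file is an assumption):

docker run --rm -p 9108:9108 -p 9109:9109 \
  -v "$PWD/graphite_mapping.conf:/tmp/graphite_mapping.conf:ro" \
  prom/graphite-exporter:v0.13.1 \
  --graphite.mapping-config=/tmp/graphite_mapping.conf \
  --graphite.sample-expiry=5m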

q000p avatar Jan 26 '23 13:01 q000p

On the screenshot you can see when the flat line started (11h) and ended (14h). 14 - 11 = 3h.

q000p avatar Jan 26 '23 13:01 q000p

Now I have added the line - '--graphite.sample-expiry=1m', but it has no effect. What could be the reason?

q000p avatar Jan 26 '23 14:01 q000p

1 minute may be too little, depending on how often you send samples. What happens when you set it to 5m?

Can you get the raw samples for the last few hours from Prometheus and see what it really knows? Something like temperature{ip="192_168_221_41"}[6h], but you will have to put in your actual metric name and label since I can't see those.
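
One way to pull those raw samples outside Grafana is the Prometheus HTTP API (a sketch; the Prometheus address and the selector are placeholders to substitute):

# a range-vector selector in an instant query returns the raw samples
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=temperature{ip="192_168_221_41"}[6h]'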

Also, please look at the exporter's metrics page and see which metrics are present at which times.

matthiasr avatar Jan 26 '23 14:01 matthiasr

What is the query in that dashboard?

matthiasr avatar Jan 26 '23 14:01 matthiasr

If I open http://192.168.59.20:9108/metrics I see

# HELP mpa1000_cpu_usage Graphite metric mpa1000_cpu_usage
# TYPE mpa1000_cpu_usage gauge
mpa1000_cpu_usage{ip="192_168_223_40",job="mpa1000log"} 25.510204315186
# HELP mpa1000_temperature Graphite metric mpa1000_temperature
# TYPE mpa1000_temperature gauge
mpa1000_temperature{ip="192_168_223_40",job="mpa1000log"} 56.722309112549
# HELP mpa1000_video Graphite metric mpa1000_video
# TYPE mpa1000_video gauge
mpa1000_video{ip="192_168_223_40",job="mpa1000log"} 0

q000p avatar Jan 26 '23 14:01 q000p

so that is only the current one now.

Applying your changes restarts the exporter, which always clears everything. You will need something to appear and disappear to see any effects.
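
A minimal way to watch a metric appear and disappear (a sketch; the plaintext Graphite port 9109 is taken from the compose file above, the metric name is made up, and nc flags vary between netcat variants):

# push one sample in Graphite plaintext format: <name> <value> <timestamp>
echo "test_metric 42 $(date +%s)" | nc -w1 localhost 9109

# it should show up on the metrics page now...
curl -s http://localhost:9108/metrics | grep test_metric

# ...and, with --graphite.sample-expiry=5m, disappear roughly five
# minutes after the last push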

matthiasr avatar Jan 26 '23 14:01 matthiasr

I turned off the source. After 10 minutes it is still the same:


graphite_sample_expiry_seconds 300
# HELP graphite_tag_parse_failures Total count of samples with invalid tags
# TYPE graphite_tag_parse_failures counter
graphite_tag_parse_failures 0
# HELP mpa1000_cpu_usage Graphite metric mpa1000_cpu_usage
# TYPE mpa1000_cpu_usage gauge
mpa1000_cpu_usage{ip="192_168_223_40",job="mpa1000log"} 24.489795684814
# HELP mpa1000_temperature Graphite metric mpa1000_temperature
# TYPE mpa1000_temperature gauge
mpa1000_temperature{ip="192_168_223_40",job="mpa1000log"} 60.044410705566
# HELP mpa1000_video Graphite metric mpa1000_video
# TYPE mpa1000_video gauge
mpa1000_video{ip="192_168_223_40",job="mpa1000log"} 0

q000p avatar Jan 26 '23 15:01 q000p

When you say you "turned off the source", did that include 192.168.223.40? What is the query used in the Grafana dashboard?

matthiasr avatar Feb 10 '23 10:02 matthiasr

Yes, I turned off 192.168.223.40

{
  "request": {
    "url": "api/ds/query",
    "method": "POST",
    "data": {
      "queries": [
        {
          "datasource": {
            "type": "prometheus",
            "uid": "aJMidLhVk"
          },
          "editorMode": "builder",
          "expr": "mpa1000_video",
          "legendFormat": "{{ip}}",
          "range": true,
          "refId": "A",
          "queryType": "timeSeriesQuery",
          "exemplar": false,
          "requestId": "6A",
          "utcOffsetSec": 0,
          "interval": "",
          "datasourceId": 1,
          "intervalMs": 900000,
          "maxDataPoints": 50
        }
      ],
      "range": {
        "from": "2023-01-21T10:12:43.188Z",
        "to": "2023-01-22T00:31:57.387Z",
        "raw": {
          "from": "2023-01-21T10:12:43.188Z",
          "to": "2023-01-22T00:31:57.387Z"
        }
      },
      "from": "1674295963188",
      "to": "1674347517387"
    },
    "hideFromInspector": false
  },
  "response": {
    "results": {
      "A": {
        "status": 200,
        "frames": [],
        "refId": "A"
      }
    }
  }
}

q000p avatar Feb 13 '23 08:02 q000p

I'm having this very issue in version 0.15.0 running in Docker. Help please!

omarmarquez avatar Jan 13 '24 05:01 omarmarquez