If a source doesn't send metrics, it is not removed from the list.
If you turn off the source, the exporter remembers the last value and keeps sending it.
On the screenshot they are 223_40 and 223_41.
Could you remove a source after it stops sending values?
This is already supported:
To avoid using unbounded memory, metrics will be garbage collected five minutes after they are last pushed to. This is configurable with the
--graphite.sample-expiry flag.
If this is configured but does not work, please re-open this issue with the following information:
- the value of this flag
- the value of the graphite_sample_expiry_seconds metric
- debug logs of the incoming metrics (redacted as much as you need to)
- the raw metric values from Prometheus (query for the_metric_name[1h] in the console tab)
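For the first two items, a minimal sketch of how to check them in one go (assuming the exporter's metrics page is reachable on its default port 9108; the host is a placeholder):

# the graphite_sample_expiry_seconds metric reflects the configured flag value, in seconds
curl -s http://localhost:9108/metrics | grep graphite_sample_expiry_seconds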
The manual says:
To avoid using unbounded memory, metrics will be garbage collected five minutes after they are last pushed to. This is configurable with the --graphite.sample-expiry flag.
But in the latest Docker image this setting (as I think) is set to 3 hours.
That's odd – we don't set it there at all. How are you running the image? Can you share the whole command you use?
How did you determine that it is 3 hours?
This is the relevant part of the Docker stack .yml file:
graphite-exporter:
  image: prom/graphite-exporter:v0.13.1
  ports:
    - "9108:9108"
    - "9109:9109"
  volumes:
    - graphite-configs:/tmp:ro
  command:
    - '--graphite.mapping-config=/tmp/graphite_mapping.conf'
On the screenshot you can see the time when the flat line started (11h) and ended (14h). 14 - 11 = 3 h.
Now I have added the line
- '--graphite.sample-expiry=1m'
but there is no result.
What is the reason?
1 minute may be too little, depending on how often you send samples. What happens when you set it to 5m?
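A sketch of what the command section of the compose file would look like with that value, based on the snippet shared above (not a confirmed fix):

  command:
    - '--graphite.mapping-config=/tmp/graphite_mapping.conf'
    - '--graphite.sample-expiry=5m'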
Can you get the raw samples for the last few hours from Prometheus and see what it really knows? Something like temperature{ip="192_168_221_41"}[6h] but you will have to put in your actual metric name and label since I can't see those.
Also, please look at the exporter's metric page, and see what metrics are there when.
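One way to record "what metrics are there when" is a simple polling loop; a sketch, assuming a shell on a host that can reach the exporter (host and port are placeholders, interval is arbitrary):

# poll the exporter once a minute, timestamp the output, and keep only sample lines
while true; do
  date
  curl -s http://localhost:9108/metrics | grep -v '^#'
  sleep 60
done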
What is the query in that dashboard?
If I open http://192.168.59.20:9108/metrics I see
# HELP mpa1000_cpu_usage Graphite metric mpa1000_cpu_usage
# TYPE mpa1000_cpu_usage gauge
mpa1000_cpu_usage{ip="192_168_223_40",job="mpa1000log"} 25.510204315186
# HELP mpa1000_temperature Graphite metric mpa1000_temperature
# TYPE mpa1000_temperature gauge
mpa1000_temperature{ip="192_168_223_40",job="mpa1000log"} 56.722309112549
# HELP mpa1000_video Graphite metric mpa1000_video
# TYPE mpa1000_video gauge
mpa1000_video{ip="192_168_223_40",job="mpa1000log"} 0
so that is only the current one now.
Applying your changes restarts the exporter, which always clears everything. You will need something to appear and disappear to see any effects.
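A sketch of one way to make something appear without touching the real sources: push a throwaway sample over the Graphite plaintext protocol (port 9109 from the compose file; the metric name is made up, and nc flags vary between netcat variants):

echo "test.expiry.check 1 $(date +%s)" | nc -w 1 localhost 9109
# the sample should show up on :9108/metrics (possibly renamed by your mapping rules)
# and disappear roughly --graphite.sample-expiry after the last push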
I turned off the source. After 10 minutes it is the same:
graphite_sample_expiry_seconds 300
# HELP graphite_tag_parse_failures Total count of samples with invalid tags
# TYPE graphite_tag_parse_failures counter
graphite_tag_parse_failures 0
# HELP mpa1000_cpu_usage Graphite metric mpa1000_cpu_usage
# TYPE mpa1000_cpu_usage gauge
mpa1000_cpu_usage{ip="192_168_223_40",job="mpa1000log"} 24.489795684814
# HELP mpa1000_temperature Graphite metric mpa1000_temperature
# TYPE mpa1000_temperature gauge
mpa1000_temperature{ip="192_168_223_40",job="mpa1000log"} 60.044410705566
# HELP mpa1000_video Graphite metric mpa1000_video
# TYPE mpa1000_video gauge
mpa1000_video{ip="192_168_223_40",job="mpa1000log"} 0
When you say you "turned off the source", did that include 192.168.223.40? What is the query used in the Grafana dashboard?
Yes, I turned off 192.168.223.40
{
"request": {
"url": "api/ds/query",
"method": "POST",
"data": {
"queries": [
{
"datasource": {
"type": "prometheus",
"uid": "aJMidLhVk"
},
"editorMode": "builder",
"expr": "mpa1000_video",
"legendFormat": "{{ip}}",
"range": true,
"refId": "A",
"queryType": "timeSeriesQuery",
"exemplar": false,
"requestId": "6A",
"utcOffsetSec": 0,
"interval": "",
"datasourceId": 1,
"intervalMs": 900000,
"maxDataPoints": 50
}
],
"range": {
"from": "2023-01-21T10:12:43.188Z",
"to": "2023-01-22T00:31:57.387Z",
"raw": {
"from": "2023-01-21T10:12:43.188Z",
"to": "2023-01-22T00:31:57.387Z"
}
},
"from": "1674295963188",
"to": "1674347517387"
},
"hideFromInspector": false
},
"response": {
"results": {
"A": {
"status": 200,
"frames": [],
"refId": "A"
}
}
}
}
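Since the response above comes back with empty frames, a sketch of cross-checking the same metric against Prometheus directly, bypassing Grafana (the Prometheus host and default port 9090 are assumptions; the metric name is taken from the exporter output above):

curl -sG 'http://<prometheus-host>:9090/api/v1/query' \
  --data-urlencode 'query=mpa1000_video[6h]'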
I'm having this very same issue in version 0.15.0 running in Docker. Help please!