prometheus_flask_exporter
gauge value not changing (remains at 0.0)
I am using PrometheusMetrics (prometheus_flask_exporter) with Flask (see the code snippet below) to generate a gauge-type metric, but the gauge value doesn't appear to increase when viewed on the /metrics Prometheus endpoint (i.e., it remains at 0.0). What could be the issue? The other metric types (counter, summary, histogram) work fine.
```python
@app.route('/gauge')
@metrics.gauge('metric_type', 'gauge')
def by_gauge():
    return "OK"
```
Hey, that gauge by default tracks the invocations in progress (i.e. it increments by one when entering the function and decrements by one when exiting it), which is why it stays at 0 unless you have some constant load on it, see https://github.com/rycus86/prometheus_flask_exporter/blob/master/prometheus_flask_exporter/__init__.py#L551-L560
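For illustration, a minimal sketch of that default behaviour (the /slow endpoint and the five-second sleep are made up here): the gauge only reads non-zero while a request is still inside the decorated view.

```python
import time

from flask import Flask
from prometheus_flask_exporter import PrometheusMetrics

app = Flask(__name__)
metrics = PrometheusMetrics(app)

@app.route('/slow')
@metrics.gauge('in_progress_requests', 'Requests currently being handled')
def slow():
    time.sleep(5)  # while this sleeps, the gauge reads 1; it drops back to 0 on return
    return "OK"
```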
I see.
In Prometheus documentation, a gauge is defined as a non-monotonic variable where its value can go up or down - https://prometheus.io/docs/concepts/metric_types/#gauge .
So a gauge in prometheus_flask_exporter basically tracks the invocations (i.e., requests) in progress and nothing else? In other words, we can't use it to implement a custom gauge based on a call to an external API (e.g., temperature readings, which can go up or down)?
You can get a Gauge from the underlying library we use, and that'll show up on the metrics endpoint like all the "built-in" ones, see https://github.com/prometheus/client_python#gauge
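As a rough sketch of that approach (the metric name, the /update-temperature endpoint and the random stand-in for the external API call are all made up for illustration), a plain prometheus_client Gauge created next to PrometheusMetrics can be set explicitly, so its value can go up or down:

```python
import random

from flask import Flask
from prometheus_client import Gauge
from prometheus_flask_exporter import PrometheusMetrics

app = Flask(__name__)
metrics = PrometheusMetrics(app)

# A plain prometheus_client Gauge, registered in the default registry,
# so it shows up on /metrics alongside the exporter's own metrics.
temperature_gauge = Gauge('external_temperature', 'Last temperature reading')

def fetch_temperature_from_api():
    # stand-in for the real external API call
    return 20.0 + random.random() * 5

@app.route('/update-temperature')
def update_temperature():
    temperature_gauge.set(fetch_temperature_from_api())  # value can go up or down
    return "OK"
```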
I have tried a custom Gauge metric (for counting Mongo connections), but the value remains 0 all the time:
```python
import logging
import subprocess

from apscheduler.schedulers.background import BackgroundScheduler
from prometheus_client import Gauge
from prometheus_flask_exporter.multiprocess import GunicornPrometheusMetrics

log = logging.getLogger(__name__)

mongo_connection_counter = 0.0

def get_mongo_connections():
    """
    Get the number of open MongoDB connections from the Linux `netstat` command.
    """
    global mongo_connection_counter
    # Count the lines of `netstat -an` output that mention the MongoDB port
    output = subprocess.run(['netstat', '-an'], capture_output=True, text=True).stdout
    mongo_connections_list = [line for line in output.splitlines() if ':27017' in line]
    mongo_connection_counter = float(len(mongo_connections_list))
    log.info(f"MONGO CONNECTION COUNTER: {mongo_connection_counter}")
    return mongo_connection_counter

def mongo_cn():
    return mongo_connection_counter

# Background job that refreshes the Mongo connection count every 5 seconds
sched = BackgroundScheduler(daemon=True)
sched.add_job(get_mongo_connections, 'interval', seconds=5)
sched.start()

mongo_connections_gauge = Gauge("mongo_conn_counter", "help", multiprocess_mode='max')
mongo_connections_gauge.set_function(lambda: mongo_cn())
```
Is it possible to get such custom Gauge metrics?
Hm, yeah, something is definitely broken with Gauges and GunicornPrometheusMetrics, I think the metrics endpoint is not hooked up with the right registry. In the meantime, could you check whether it changes anything if you call mongo_connections_gauge.set(mongo_connection_counter) inside the get_mongo_connections function instead of setting a function for the Gauge?
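Something along these lines, as a minimal sketch that reuses the netstat counting from your snippet (the subprocess.run call is just a tidied-up stand-in for your Popen pipeline):

```python
import subprocess

from prometheus_client import Gauge

mongo_connections_gauge = Gauge("mongo_conn_counter", "help", multiprocess_mode='max')

def get_mongo_connections():
    # same netstat-based count as above, but the value is written straight
    # into the Gauge on every scheduler run instead of via set_function()
    output = subprocess.run(['netstat', '-an'], capture_output=True, text=True).stdout
    count = float(sum(':27017' in line for line in output.splitlines()))
    mongo_connections_gauge.set(count)
    return count
```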
Yes, I have tried mongo_connections_gauge.set(mongo_connection_counter) and the result is the same: it sticks to 0 all the time. I have also checked the *.db files under the PROMETHEUS_MULTIPROC_DIR folder, and it seems the workers (by PID) are not saving any custom Gauge data to their respective *.db files at all.
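For reference, the contents of those files can also be read back independently of the Flask endpoint with something like this (a sketch, assuming PROMETHEUS_MULTIPROC_DIR points at the same folder the workers write to):

```python
from prometheus_client import CollectorRegistry, generate_latest
from prometheus_client.multiprocess import MultiProcessCollector

# Collects samples from the *.db files in PROMETHEUS_MULTIPROC_DIR
registry = CollectorRegistry()
MultiProcessCollector(registry)
print(generate_latest(registry).decode())
```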
The gauge value is not changing when the variable changes; it sticks to 0.0 only. What should I do?