Feature request: add lane label to query related metrics
Description
I would like to see the lane label on the Prometheus metrics for queries.
This would allow me to visualize the effectiveness of our lanes and to alert based on their usage.
This will become even more interesting once the following issue gets tackled/integrated: https://github.com/apache/druid/issues/6993
Motivation
Right now we can't visualize our query lane usage, and we can only adjust the configuration based on guesses or on metrics that we would have our developers emit from our own apps. Going forward we don't want to have to write these metrics ourselves, since Druid is the more sensible place to emit them.
EDIT: Per the docs (https://druid.apache.org/docs/latest/operations/metrics/), it seems this should already be the case. However, even though I fired a manual query against a datasource with laning set up and got a result row back, I couldn't see a druid_query_priority metric coming from either the broker or the router. What I do see, however, is a druid_sqlquery_time metric, so my metric collection itself seems to work fine.
Demo Query used:
curl -X POST http://localhost:56713/druid/v2/sql \
  -H "Content-Type: application/json" \
  -d '{
    "query": "SELECT * FROM wikipedia WHERE __time >= TIMESTAMP '\''2024-02-01 00:00:00'\'' LIMIT 1",
    "context": {
      "lane": "lane1"
    }
  }'
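For reference, here is a minimal sketch of the broker-side laning setup this assumes: the manual laning strategy with a lane named lane1, matching the query context above. The numeric limits are illustrative values, not recommendations:

```properties
# Broker runtime.properties (illustrative values).

# Upper bound on concurrently running queries on this broker.
druid.query.scheduler.numThreads=40

# Use the 'manual' laning strategy so lanes can be declared by name.
druid.query.scheduler.laning.strategy=manual

# Cap the 'lane1' lane at 10 concurrent queries; queries opt in by setting
# "lane": "lane1" in their query context, as in the curl example above.
druid.query.scheduler.laning.lanes.lane1=10
```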
Hi @applike-ss,
I am using version 32.0.0, and when I try running a similar query, I do see the lane information in the LoggingEmitter and PrometheusEmitter.
- Logging Emitter
2025-02-28T18:37:08,718 INFO [sql[8dda3ed0-7b14-4aa3-8ec9-a77cd8468a61]] org.apache.druid.java.util.emitter.core.LoggingEmitter - [metrics] {"feed":"metrics","metric":"query/priority","service":"druid/broker","host":"localhost:8082","type":"scan","version":"32.0.0","value":0,"dataSource":["wikipedia"],"lane":"lane4","timestamp":"2025-02-28T18:37:08.718Z"}
- Prometheus Emitter
- Currently, the query/priority metric is not listed in the defaultMetrics.json file.
- I added the following record to my local file, and now I do see the metric, as well as the lane information (a sketch of the full emitter wiring follows at the end of this comment):
  "query/priority" : { "dimensions" : ["lane", "dataSource", "type"], "type" : "count", "help": "Query Priority Value."}
- Output from the Prometheus metrics endpoint:
$ curl http://localhost:8085/metrics | grep priority
# HELP druid_query_priority_total Query Priority Value.
# TYPE druid_query_priority_total counter
druid_query_priority_total{dataSource="_wikipedia_",druid_service="druid/broker",host_name="localhost:8082",lane="default",type="segmentMetadata",} 0.0
druid_query_priority_total{dataSource="_wikipedia_",druid_service="druid/broker",host_name="localhost:8082",lane="lane4",type="scan",} 0.0
- The query I used:
$ curl -X POST http://localhost:8082/druid/v2/sql \
  -H "Content-Type: application/json" \
  -d '{
    "query": "SELECT * FROM wikipedia LIMIT 1",
    "context": {
      "lane": "lane4"
    }
  }'
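For completeness, a hedged sketch of how the Prometheus emitter can be pointed at a customized copy of defaultMetrics.json that contains the query/priority entry above. The file path and port are placeholder examples; the property names are from the prometheus-emitter extension:

```properties
# Broker runtime.properties (sketch; file path and port are examples).

# Ensure the prometheus-emitter extension is part of your existing loadList.
druid.extensions.loadList=["prometheus-emitter"]
druid.emitter=prometheus

# Expose an HTTP endpoint that Prometheus can scrape directly.
druid.emitter.prometheus.strategy=exporter
druid.emitter.prometheus.port=8085

# Copy of defaultMetrics.json with the "query/priority" record added, so the
# lane dimension gets exported.
druid.emitter.prometheus.dimensionMapPath=/opt/druid/conf/metrics/customMetrics.json
```

After a broker restart, the druid_query_priority_total series with the lane label should show up on the /metrics endpoint, as in the output above.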
Thank you for your feedback. Indeed, after adjusting the metrics mapping, I can also see the metric.
I would like to see the query count by lane. Is that somehow possible?
@applike-ss, glad that you could see the query/priority metric as well.
> I would like to see the query count by lane. Is that somehow possible?
I am afraid that right now it is not. When emitting the query/count metric, we do not set the lane dimension, or any other dimension. So for this metric, and for the other query/*/count metrics, we only see the default dimensions of service and host.
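For reference, one way to double-check which exported series currently carry the lane label is to grep the exporter endpoint from the earlier comment (the host and port 8085 are specific to that example setup):

```bash
# List the metric names that expose a "lane" label on the broker's
# Prometheus exporter endpoint (port taken from the example above).
curl -s http://localhost:8085/metrics | grep 'lane=' | cut -d'{' -f1 | sort -u
```

With the mapping above, only druid_query_priority_total is expected in that list, since query/count and the other query/*/count metrics are emitted without per-query dimensions such as lane.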
@a2l007, any additional thoughts?
@ashwintumma23 Can the lane dimension be set in the future? We would like to be able to see the saturation of our lanes.
It would be nice to have additional dimension support for query/count metrics, but I'm not sure if the provider used in QueryCountStatsMonitor can be reused for adding lane-based dimensions in its current form. The QueryScheduler may have to be merged with the QueryCountStatsMonitor at some level for this to work.
This issue has been marked as stale due to 280 days of inactivity. It will be closed in 4 weeks if no further activity occurs. If this issue is still relevant, please simply write any comment. Even if closed, you can still revive the issue at any time or discuss it on the [email protected] list. Thank you for your contributions.
/fresh