
Kafka lag metric gives incorrect values

fpytloun opened this issue on Aug 22, 2024


Problem

I am working on Vector dashboards and I noticed that Kafka lag spiked up at some point, probably due to some Kafka or Elasticsearch glitch. But everything is processing just fine, the lag is not growing either, it just settled at a different level. And I don't see any delay in log delivery. I also dug deeper into per-partition metrics and it is similar for all of them (I wanted to make sure it is not just some partitions being stuck).

This chart shows the amount of time (at the current processing rate) needed to process all unconsumed messages. It would mean we have a 1-hour delay, which is not true; we have less than 5 minutes. When I restarted one of the Vector instances, it went back down to the 5-minute level.

[chart image]
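
For reference, the value plotted on such a chart is roughly the consumer lag divided by the current consume rate. A minimal sketch of that arithmetic (the numbers are made up, not taken from this deployment):

```rust
// Rough estimate of how long it would take to drain the current lag
// at the current consumption rate. All numbers are illustrative.
fn time_to_drain_secs(lag_messages: f64, consume_rate_per_sec: f64) -> f64 {
    if consume_rate_per_sec <= 0.0 {
        return f64::INFINITY; // nothing is being consumed
    }
    lag_messages / consume_rate_per_sec
}

fn main() {
    // A lag of 60_000 messages at 200 msg/s looks like a ~5-minute delay,
    // while a stale reading of 720_000 messages would look like ~1 hour.
    println!("{} s", time_to_drain_secs(60_000.0, 200.0));  // 300 s
    println!("{} s", time_to_drain_secs(720_000.0, 200.0)); // 3600 s
}
```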

I tried to confirm that it is a Vector/rdkafka issue by comparing with kafka-lag-exporter metrics, and those show the correct value.

Another interesting thing I found is that the metric for partition_id -1 has a negative value 😯

[metrics screenshot]

Configuration

No response

Version

0.39.0

Debug Output

No response

Example Data

No response

Additional Context

No response

References

No response

fpytloun · Aug 22 '24 13:08

Interesting, thanks for this report @fpytloun. Given the values don't match up with kafka-lag-exporter, it does seem like a potential bug in Vector, rust-rdkafka, or librdkafka.

Vector just publishes the metrics returned by rust-rdkafka: https://github.com/vectordotdev/vector/blob/fe2cc26a217364d5dd3f8c00289ce45af2446f24/src/internal_events/kafka.rs#L119-L152

I wonder if the measurement librdkafka is using is different than what kafka-lag-exporter measures? librdkafka documents it as:

Difference between (hi_offset or ls_offset) and committed_offset. hi_offset is used when isolation.level=read_uncommitted, otherwise ls_offset.

https://github.com/confluentinc/librdkafka/blob/master/STATISTICS.md
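
As a rough illustration of that definition (not librdkafka's actual code, and the offset values below are hypothetical), the per-partition lag would be computed along these lines:

```rust
// Sketch of librdkafka's documented consumer_lag: the difference between
// (hi_offset or ls_offset) and committed_offset, where hi_offset is used with
// isolation.level=read_uncommitted and ls_offset (last stable offset) otherwise.
fn consumer_lag(hi_offset: i64, ls_offset: i64, committed_offset: i64, read_uncommitted: bool) -> i64 {
    let end_offset = if read_uncommitted { hi_offset } else { ls_offset };
    // librdkafka uses -1 for offsets it does not know yet; in that case a
    // meaningful lag cannot be computed, which may explain negative readings.
    if end_offset < 0 || committed_offset < 0 {
        return -1;
    }
    end_offset - committed_offset
}

fn main() {
    assert_eq!(consumer_lag(1_000, 990, 950, false), 40); // read_committed
    assert_eq!(consumer_lag(1_000, 990, 950, true), 50);  // read_uncommitted
    assert_eq!(consumer_lag(1_000, 990, -1, true), -1);   // committed offset unknown
}
```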

If it would be helpful, I could add a log message around here to log the data coming from librdkafka: https://github.com/vectordotdev/vector/blob/fe2cc26a217364d5dd3f8c00289ce45af2446f24/src/kafka.rs#L175-L185

Vector just ends up using the return values to emit metrics:

https://github.com/vectordotdev/vector/blob/master/src/internal_events/kafka.rs#L119-L152
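
For reference, here is a minimal sketch of how a rust-rdkafka client context can read `consumer_lag` from the statistics callback. It assumes rust-rdkafka's `ClientContext::stats` hook and the `consumer_lag` field on `rdkafka::statistics::Partition` (and requires `statistics.interval.ms` to be set); it is not Vector's actual implementation.

```rust
use rdkafka::client::ClientContext;
use rdkafka::statistics::Statistics;

// Context that dumps per-partition consumer lag every time librdkafka
// emits a statistics snapshot.
struct LagLoggingContext;

impl ClientContext for LagLoggingContext {
    fn stats(&self, statistics: Statistics) {
        for (topic_name, topic) in &statistics.topics {
            for partition in topic.partitions.values() {
                // librdkafka also emits a synthetic entry for its internal
                // "unassigned" partition with id -1; skipping it (and unknown
                // lags reported as -1) avoids the confusing negative values.
                if partition.partition == -1 || partition.consumer_lag < 0 {
                    continue;
                }
                println!(
                    "topic={} partition={} consumer_lag={}",
                    topic_name, partition.partition, partition.consumer_lag
                );
            }
        }
    }
}
```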

jszwedko · Aug 22 '24 22:08

I dug a bit deeper and I think this is related to consumer group rebalances and partition re-assignments. It seems that if a rebalance occurs, the Vector instance that was originally consuming a topic partition still reports the last lag value. Here is an example from a deployment with two regions and 3 Vector replicas in each region. After a Kafka hiccup and reassignment of one of the partitions, there are two instances exposing the metric for this topic partition: one is stale (and until that pod is restarted it keeps exposing this metric), while the other shows the normal value:

[metrics screenshot]

The fix is to stop exposing the metric for a topic partition that is no longer consumed by the given Vector instance.
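
A minimal, standalone sketch of that idea (plain Rust with hypothetical types, not Vector's code): track which topic/partitions are currently assigned and drop lag entries for everything else whenever the assignment changes, e.g. after a rebalance.

```rust
use std::collections::{HashMap, HashSet};

// (topic, partition) -> last observed consumer lag
type LagGauges = HashMap<(String, i32), i64>;

// After a rebalance, drop gauges for partitions this instance no longer owns,
// so stale values stop being exported. `assigned` is whatever the consumer
// currently reports as its assignment.
fn prune_stale_gauges(gauges: &mut LagGauges, assigned: &HashSet<(String, i32)>) {
    gauges.retain(|key, _| assigned.contains(key));
}

fn main() {
    let mut gauges: LagGauges = HashMap::new();
    gauges.insert(("logs".to_string(), 0), 120);
    gauges.insert(("logs".to_string(), 1), 4_000_000); // stale value from before the rebalance

    // After the rebalance this instance only owns partition 0.
    let assigned = HashSet::from([("logs".to_string(), 0)]);
    prune_stale_gauges(&mut gauges, &assigned);

    assert_eq!(gauges.len(), 1);
    assert!(gauges.contains_key(&("logs".to_string(), 0)));
}
```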

A workaround is to take the non-negative (as there is still a weird -1 value sometimes) minimum value per `component_id,topic_id,partition_id`, like this:

min(vector_kafka_consumer_lag > 0) by (component_id,topic_id,partition_id)

fpytloun · Dec 03 '24 09:12

Aaah, interesting, nice find @fpytloun

jszwedko · Dec 03 '24 16:12

I think this is caused by #22006

sam6258 · Dec 12 '24 04:12

v0.44.0 ships a newer rdkafka; we should test whether this issue is still present.

fpytloun · Feb 21 '25 13:02

Seeing this problem with Vector 0.46.1. There is a big discrepancy between the values reported by the vector_kafka_consumer_lag metric and our internal metric, which represents how far behind a consumer group is from the most recent message in a topic partition (and which is known to be correct, as we run a large fleet of Kafka clusters).

pavdmyt · May 21 '25 12:05