amazon-kinesis-streams-for-fluent-bit

Duplicate records when stream throughput limit exceeded

Open adrian-skybaker opened this issue 1 year ago • 3 comments

I'm finding that I see significant numbers of duplicates if I hit throttling on the kinesis stream.

Obviously I realise I want to avoid throttling, but I'm wondering if this is expected behaviour? For example, I would expect that even when batching, the plugin would retry only the failed parts of the batch.
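For context, the Kinesis PutRecords API reports failures per record (a non-zero FailedRecordCount plus an ErrorCode on each failed result entry), so resending only the failed entries is possible in principle. A minimal sketch of that filtering logic, using hypothetical stand-in types rather than the real AWS SDK types, and not the plugin's actual implementation:

```go
package main

import "fmt"

// Hypothetical stand-ins for the AWS SDK's PutRecordsRequestEntry
// and PutRecordsResultEntry types.
type RequestEntry struct {
	PartitionKey string
	Data         []byte
}

type ResultEntry struct {
	ErrorCode string // empty on success, set on failure
}

// failedEntries returns only the batch entries whose corresponding
// result reported an error, so a retry resends just those records
// rather than the whole batch.
func failedEntries(batch []RequestEntry, results []ResultEntry) []RequestEntry {
	var retry []RequestEntry
	for i, r := range results {
		if r.ErrorCode != "" {
			retry = append(retry, batch[i])
		}
	}
	return retry
}

func main() {
	batch := []RequestEntry{
		{PartitionKey: "a", Data: []byte("rec-a")},
		{PartitionKey: "b", Data: []byte("rec-b")},
	}
	// Mirrors the "1/2 records failed" case in the log below:
	// the first record succeeded, the second was throttled.
	results := []ResultEntry{
		{},
		{ErrorCode: "ProvisionedThroughputExceededException"},
	}
	for _, e := range failedEntries(batch, results) {
		fmt.Printf("retrying record with partition key %s\n", e.PartitionKey)
	}
}
```

If instead the whole Fluent Bit chunk is re-flushed on a partial failure, the records that already succeeded would be delivered again, which would explain the duplicates.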

If this is not expected, I'm happy to provide more logging if that's helpful (the sample below is warning level and above).

This is using amazon/aws-for-fluent-bit:init-2.28.1.

Log sample:

2022-09-15T17:47:17.678+12:00 | time="2022-09-15T05:47:17Z" level=warning msg="[kinesis 0] 1/2 records failed to be delivered. Will retry.\n"
2022-09-15T17:47:17.678+12:00 | time="2022-09-15T05:47:17Z" level=warning msg="[kinesis 0] Throughput limits for the stream may have been exceeded."
2022-09-15T17:47:19.103+12:00 | [2022/09/15 05:47:19] [ warn] [engine] failed to flush chunk '1-1663220835.534380470.flb', retry in 11 seconds: task_id=1, input=forward.1 > output=kinesis.1 (out_id=1)
Fluent Bit output configuration:

[OUTPUT]
    Name kinesis
    Match service-firelens*
    region ${AWS_REGION}
    stream my-stream-name
    aggregation true
    partition_key container_id
    compression gzip

adrian-skybaker avatar Sep 15 '22 06:09 adrian-skybaker