amazon-kinesis-streams-for-fluent-bit
Duplicate records when stream throughput limit exceeded
I'm seeing a significant number of duplicate records whenever I hit throttling on the Kinesis stream.
Obviously I realise I want to avoid throttling in the first place, but I'm wondering whether this is expected behaviour. For example, I would expect that even when batching, the plugin would only retry the failed part of the batch.
If this is not expected, happy to provide more logging if that's helpful (below is warning level and above).
This is using amazon/aws-for-fluent-bit:init-2.28.1.
Log sample:
2022-09-15T17:47:17.678+12:00 | time="2022-09-15T05:47:17Z" level=warning msg="[kinesis 0] 1/2 records failed to be delivered. Will retry.\n"
2022-09-15T17:47:17.678+12:00 | time="2022-09-15T05:47:17Z" level=warning msg="[kinesis 0] Throughput limits for the stream may have been exceeded."
2022-09-15T17:47:19.103+12:00 | [2022/09/15 05:47:19] [ warn] [engine] failed to flush chunk '1-1663220835.534380470.flb', retry in 11 seconds: task_id=1, input=forward.1 > output=kinesis.1 (out_id=1)
Output configuration:
[OUTPUT]
Name kinesis
Match service-firelens*
region ${AWS_REGION}
stream my-stream-name
aggregation true
partition_key container_id
compression gzip
https://github.com/fluent/fluent-bit/issues/2159#issuecomment-632971665 may be relevant.
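For context on the retry behaviour being asked about: the Kinesis PutRecords API reports failures per record (a FailedRecordCount plus an ErrorCode on each failed result entry), so a sender can resend only the failed entries instead of replaying the whole batch. The sketch below is a hypothetical illustration of that pattern using aws-sdk-go; it is not the plugin's actual code, and the function, stream, and record names are made up.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kinesis"
)

// putWithPartialRetry resends only the entries that the PutRecords response
// marks as failed, instead of replaying the whole batch.
func putWithPartialRetry(svc *kinesis.Kinesis, stream string, entries []*kinesis.PutRecordsRequestEntry, maxAttempts int) error {
	for attempt := 1; len(entries) > 0 && attempt <= maxAttempts; attempt++ {
		out, err := svc.PutRecords(&kinesis.PutRecordsInput{
			StreamName: aws.String(stream),
			Records:    entries,
		})
		if err != nil {
			return err // whole-request failure; caller decides how to handle it
		}
		if aws.Int64Value(out.FailedRecordCount) == 0 {
			return nil // every entry was accepted
		}
		// out.Records is index-aligned with the request entries; keep only the
		// ones carrying an ErrorCode (e.g. ProvisionedThroughputExceededException)
		// and retry just those.
		var failed []*kinesis.PutRecordsRequestEntry
		for i, res := range out.Records {
			if aws.StringValue(res.ErrorCode) != "" {
				failed = append(failed, entries[i])
			}
		}
		log.Printf("%d/%d records failed, retrying only those", len(failed), len(entries))
		entries = failed
		time.Sleep(time.Duration(attempt) * 500 * time.Millisecond) // simple linear backoff
	}
	if len(entries) > 0 {
		return fmt.Errorf("%d records still failing after %d attempts", len(entries), maxAttempts)
	}
	return nil
}

func main() {
	sess := session.Must(session.NewSession())
	svc := kinesis.New(sess)

	entries := []*kinesis.PutRecordsRequestEntry{
		{Data: []byte(`{"log":"hello"}`), PartitionKey: aws.String("container-1")},
	}
	if err := putWithPartialRetry(svc, "my-stream-name", entries, 3); err != nil {
		log.Fatal(err)
	}
}
```

Even with per-record retries, delivery is still at-least-once: if Fluent Bit later replays an entire chunk (as in the "failed to flush chunk ... retry in 11 seconds" line above), records that were already accepted will be sent again.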
Hello, do you also encounter this error?
time="2023-11-17T13:23:29Z" level=error msg="[kinesis 0] The partition key could not be found in the record, using a random string instead"
You need to make sure that the key you configure as partition_key (here, container_id) is actually present in your log records; when it is missing, the plugin falls back to a random partition key, which is what that error message reports.
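For illustration only (the record below is invented, not taken from this thread): with partition_key container_id, each record that reaches the plugin needs a top-level container_id field, roughly like this:

```json
{
  "container_id": "7e2d3b9c1a4f",
  "source": "stdout",
  "log": "request handled in 12 ms"
}
```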