fluent-bit
Allow output plugins to configure a max chunk size
@edsiper and I discussed this recently; opening an issue to track it.
Problem
Many APIs have a limit on the amount of data they can ingest per request. For example, #1187 discusses that the DataDog HTTP API has a 2 MB payload limit. A single request is made per flush, and occasionally Fluent Bit can send a chunk that is over 2 MB.
Some APIs have a limit on the number of log messages they can accept per HTTP request. For example, Amazon CloudWatch has a 10,000 log message limit per PutLogEvents call. Amazon Kinesis Firehose and Amazon Kinesis Data Streams have a much smaller batch limit of 500 events.
Consequently, plugins have to implement logic to split a single chunk into multiple requests (or accept that occasionally large chunks will fail to be sent). This becomes troublesome when a single API request fails in the set. If the plugin issues a retry, the whole chunk will get retried. The fractions of the chunk that got successfully uploaded will thus be sent multiple times.
Possible Solutions
Ideal solution: Output Plugins specify a max chunk size
Ideally, plugins should only have to make a single request per flush. This keeps the logic in the plugin very simple and straightforward. The common task of splitting chunks into right-sized pieces could be placed in the core of Fluent Bit.
Each output plugin could give Fluent Bit a max chunk size.
Implementing this would involve some complexity. Fluent Bit should not allocate additional memory to split chunks into smaller pieces. Instead, it can pass a pointer to a fraction of the chunk to the output, and track when the entire chunk has been successfully sent.
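For illustration only, here is a minimal, self-contained C sketch of that zero-copy splitting idea: the core never copies the chunk, it just hands the output a pointer offset and a length per fragment, and stops at the first failure so it knows how much was delivered. The names (flb_fragment_cb, chunk_dispatch, max_size) are hypothetical for this sketch and are not existing Fluent Bit APIs.

#include <stddef.h>
#include <stdio.h>

/* Hypothetical callback type: the output receives a pointer into the
   original chunk plus a length, and returns 0 on success. */
typedef int (*flb_fragment_cb)(const char *data, size_t len, void *ctx);

/* Walk the chunk in slices of at most max_size bytes without allocating
   or copying. Returns the number of bytes successfully delivered, so the
   caller can tell whether the entire chunk was sent. */
size_t chunk_dispatch(const char *chunk, size_t chunk_len, size_t max_size,
                      flb_fragment_cb cb, void *ctx)
{
    size_t sent = 0;

    while (sent < chunk_len) {
        size_t frag = chunk_len - sent;
        if (frag > max_size) {
            frag = max_size;
        }
        if (cb(chunk + sent, frag, ctx) != 0) {
            break;  /* stop at the first failed fragment */
        }
        sent += frag;
    }
    return sent;
}

/* Toy callback standing in for an output plugin's per-request send. */
static int print_fragment(const char *data, size_t len, void *ctx)
{
    (void) data;
    (void) ctx;
    printf("flushing fragment of %zu bytes\n", len);
    return 0;
}

int main(void)
{
    static char chunk[5000];
    size_t ok = chunk_dispatch(chunk, sizeof(chunk), 2048, print_fragment, NULL);
    printf("delivered %zu of %zu bytes\n", ok, sizeof(chunk));
    return 0;
}

In practice the split would have to respect record boundaries in the msgpack stream rather than raw byte offsets, but the bookkeeping idea is the same.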
Non-ideal, but easy solution
The most important issue is retries. If each flush had a unique ID associated with it, plugins could internally track whether a flush is a first attempt or a retry, and then track whether the entirety of a chunk had been sent or not.
This is not a good idea; it makes the plugin very complicated. I've included it for the sake of completeness.
Hi @PettitWesley, I am wondering what the current progress on this issue is right now?
@JeffLuoo AFAIK, no work has been done on it yet.
The same problem occurs when sending GELF over HTTP to Graylog. When Flush in the SERVICE section is set to 5, I see only one message per 5 seconds in Graylog. When I change Flush to 1, there is one message per second.
Is there anything I should put in the configuration file to have all messages appear in Graylog? :)
@Robert-turbo were you able to solve this problem somehow?
@ciastooo I started to use the TCP output:
[OUTPUT]
    Name                    gelf
    Match                   *
    Host                    tcp.yourdomain.com
    Port                    12201
    Mode                    tls
    tls                     On
    tls.verify              On
    tls.vhost               tcp.yourdomain.com
    Gelf_Short_Message_Key  log
    Gelf_Timestamp_Key      timestamp
And used Traefik with a TCP router (with SSL) in front of Graylog: https://doc.traefik.io/traefik/routing/providers/docker/#tcp-routers
@PettitWesley Is this problem solved in any recent version?
@mohitjangid1512 No work has been done on chunk sizing for outputs, AFAIK.
Alternatively, compromising between options 1 and 2, we could write some middleware that handles chunk splitting and chunk-fragment retries and wraps the flush function. This could potentially limit the changes to the AWS plugins and not require any changes to core code.
The middleware would take the parameters flush_function_ptr, chunk_fragment_size, and middleware_context, and would consist of:
1. Register the chunk in some kind of table or hashmap, with successful_chunks set to -1
2. Break the chunk up into fragments of size X
3. Call flush on a chunk fragment
4. If the flush is successful, increment successful_chunks; otherwise return retry/fail
5. Return to step 3 if fragments remain
6. Return success
On retry, the chunk would be looked up and the chunk fragments resumed at index successful_chunks + 1.
This would require no code changes in each plugin's "flush" function; instead, an additional function called flush_wrapper would call the middleware, passing the flush function's pointer, chunk_fragment_size, and middleware_context (see the sketch below).
Just a thought.
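To make that concrete, here is a rough, self-contained C sketch of such a middleware, under heavy simplifying assumptions: flush_fn_t stands in for an output plugin's flush callback, a single zero-initialized chunk_record stands in for the table/hashmap keyed by chunk, and a retry is recognized by seeing the same chunk pointer again. None of these names are existing Fluent Bit APIs.

#include <stddef.h>

#define FLUSH_OK     0
#define FLUSH_RETRY  1

/* Stand-in for an output plugin's flush callback: send one fragment. */
typedef int (*flush_fn_t)(const void *data, size_t len, void *plugin_ctx);

/* Simplified per-chunk bookkeeping; a real implementation would key a
   hashmap by a chunk (or flush) ID instead of keeping a single record.
   The record must be zero-initialized before the first call. */
struct chunk_record {
    const void *chunk;
    size_t      chunk_len;
    int         successful_fragments;  /* -1 means nothing sent yet */
};

/* Split the chunk into fragments of at most fragment_size bytes, flush
   them one by one, and remember how far we got so that a retry of the
   same chunk resumes after the last successful fragment. */
int flush_wrapper(flush_fn_t flush_fn, void *plugin_ctx,
                  const void *chunk, size_t chunk_len,
                  size_t fragment_size, struct chunk_record *rec)
{
    const char *base = chunk;
    size_t offset;

    /* First attempt: register the chunk. On a retry, keep the old record. */
    if (rec->chunk != chunk) {
        rec->chunk = chunk;
        rec->chunk_len = chunk_len;
        rec->successful_fragments = -1;
    }

    /* Resume at fragment index successful_fragments + 1. */
    offset = (size_t) (rec->successful_fragments + 1) * fragment_size;

    while (offset < chunk_len) {
        size_t len = chunk_len - offset;
        if (len > fragment_size) {
            len = fragment_size;
        }
        if (flush_fn(base + offset, len, plugin_ctx) != FLUSH_OK) {
            return FLUSH_RETRY;  /* engine retries; we resume from here */
        }
        rec->successful_fragments++;
        offset += len;
    }
    return FLUSH_OK;
}

A real version would split on record boundaries in the msgpack stream (and count records rather than bytes for APIs such as Kinesis with its 500-record batch limit), but this illustrates why no plugin's flush logic would need to change.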
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days. Maintainers can add the exempt-stale label.
This issue is still creating problems in some of our workflows.
Agreed. Surprised it's not implemented in the same manner as Fluentd.
Would you mind describing how it is implemented in Fluentd? I am not familiar with it, but I know that this is a very sensitive issue.
Hi @leonardo-albertovich @edsiper @PettitWesley, Fluentd's buffer has a configuration option to limit chunk size: https://docs.fluentd.org/configuration/buffer-section (see chunk_limit_size). Ideally, output plugins would have a chunk limit of their own, but I also want a solution for this, since it's a massive blocker and can render Fluent Bit unusable when the payload for an output becomes huge.
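For reference, in Fluentd that limit is set per output in its buffer section (documented at the link above). A minimal illustrative snippet, with placeholder output plugin and values, looks roughly like this:

<match app.**>
  @type http
  endpoint http://example.com/ingest
  <buffer>
    chunk_limit_size 2m    # no buffer chunk grows past ~2 MB before flushing
    flush_interval 5s
  </buffer>
</match>

Fluent Bit has no equivalent per-output knob today, which is what this issue asks for.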
Hello, I just wonder: is this a work in progress? We have hit this problem too. We are using Fluent Bit with the AWS Kinesis Data Streams output, and the data stream has a limit that a single record must be under 1 MB, but we see chunks larger than that. As a result, we are experiencing terrible data loss. If the feature is released, can you remind me please?
Waiting on a solution for that as well.
Is this ever going to be addressed?