fluent-bit
How to configure mem_buf_limit for Fluent Bit
Bug Report
Describe the bug
Hi team, I have a question about how to configure mem_buf_limit to handle backpressure when Logstash is not available. The issue was identified when Fluent Bit was OOM-killed: the container memory limit was set to 30 Mi and Logstash was unavailable for a long period. I then increased the container memory and tested with different log input rates. With mem_buf_limit set to 1MB I observed the following behavior:
- 1 log/sec: a restart was observed after a long time
- 100 logs/sec: no restart
- 200 logs/sec: no restart
Analysis: while mem_buf_limit has not yet been reached, Fluent Bit keeps consuming more memory as it harvests logs. Once mem_buf_limit is reached, container memory stays fairly flat (roughly a 3 to 4 Mi increase after the overlimit condition is hit).
Question: it would be great if there were some way to figure out a suitable mem_buf_limit value. Thanks in advance!
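For reference, a minimal sketch of the kind of setup in question (the paths, tag, and Logstash endpoint below are placeholders, not the actual configuration):

[SERVICE]
    Flush            1
    Log_Level        info

[INPUT]
    Name             tail
    Path             /var/log/containers/*.log    # placeholder path
    Tag              kube.*
    Mem_Buf_Limit    1MB                          # per-input cap on in-memory chunk buffering

[OUTPUT]
    Name    forward                               # placeholder; adjust to however logs are actually shipped to Logstash
    Match   *
    Host    logstash.example.local
    Port    24224

Mem_Buf_Limit is set per input plugin, so the limit above only bounds the chunks buffered by that tail input, not total process memory.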
To Reproduce
- Rubular link if applicable:
- Example log message if applicable:
{"log":"YOUR LOG MESSAGE HERE","stream":"stdout","time":"2018-06-11T14:37:30.681701731Z"}
- Steps to reproduce the problem:
Expected behavior
Screenshots
Your Environment
- Version used: 2.1.10
- Configuration:
- Environment name and version (e.g. Kubernetes? What version?):
- Server type and version:
- Operating System and version:
- Filters and plugins:
Additional context
https://docs.fluentbit.io/manual/administration/buffering-and-storage should cover it. It depends on your data rate and message size plus your actual configuration (e.g. which plugins and filters are used), but hopefully the docs give you that information.
Also, you're on an old Fluent Bit version, so you should update.
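As a rough sketch of what that page describes (values and paths below are illustrative, not recommendations): enabling filesystem buffering lets chunks spill to disk when the output is down instead of growing memory or dropping records, and storage.total_limit_size bounds how much disk a backlogged output may use.

[SERVICE]
    Flush                      1
    storage.path               /var/log/flb-storage/   # chunks overflow to disk here
    storage.sync               normal
    storage.backlog.mem_limit  5M                      # cap memory used when replaying backlog chunks

[INPUT]
    Name             tail
    Path             /var/log/containers/*.log
    Mem_Buf_Limit    1MB
    storage.type     filesystem                        # queue chunks on disk once the memory limit is hit

[OUTPUT]
    Name                      forward                  # placeholder output, as above
    Match                     *
    Host                      logstash.example.local
    Port                      24224
    storage.total_limit_size  50M                      # oldest chunks are discarded beyond this disk limit

With this kind of setup, mem_buf_limit mainly controls how much each input keeps in RAM, while sustained backpressure is absorbed by the filesystem buffer rather than by container memory.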
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days. Maintainers can add the exempt-stale label.
This issue was closed because it has been stalled for 5 days with no activity.