High disk read IO when Filebeat runs with a small memory limit
Filebeat shows abnormally large read IO (~500 MB/s) when running with a small memory limit. It returns to normal (below 10 MB/s) when the memory limit is increased.
Filebeat outputs no error logs.
This looks similar to the behavior described in this post: https://discuss.elastic.co/t/filebeat-consumes-a-large-amount-of-disk-io-reads-on-a-kubernetes-node/18002
For confirmed bugs, please report:
- Version: 8.14.1
- Operating System: beats/filebeat:8.14.1
- Discuss Forum URL:
- Steps to Reproduce: Modify the pod memory limits to 100m
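The reproduction step above can be sketched as a pod spec fragment. This is a hypothetical example; the pod/container names are placeholders, and the reporter's "100m" is written here as `100Mi` on the assumption that a ~100 MiB memory limit was meant:

```yaml
# Hypothetical pod spec fragment reproducing the low-memory condition.
apiVersion: v1
kind: Pod
metadata:
  name: filebeat            # placeholder name
spec:
  containers:
    - name: filebeat
      image: docker.elastic.co/beats/filebeat:8.14.1
      resources:
        limits:
          memory: "100Mi"   # the low limit under which the high read IO appears
```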
Pinging @elastic/elastic-agent-data-plane (Team:Elastic-Agent-Data-Plane)
Looking at the post I think this is expected behavior. The 900 open files are because Filebeat keeps each file open until the [close_inactive](https://www.elastic.co/docs/reference/beats/filebeat/filebeat-input-log#filebeat-input-log-close-inactive) timer (default 5 min) expires, and because logs are coming in faster than they can be written to Kafka. Setting `close_inactive` lower than the default will close files sooner, and increasing the value of `worker` for the Kafka output will help speed up getting the logs off the machine so it can keep up with the incoming log volume.
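The two suggested tunings could look something like this in `filebeat.yml`. This is a sketch, not a recommended production config: the paths, broker address, topic, and the specific values for `close_inactive` and `worker` are placeholders you would adjust for your environment:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/containers/*.log   # placeholder path
    # Close file handles after 1 minute of inactivity
    # instead of the 5-minute default.
    close_inactive: 1m

output.kafka:
  hosts: ["kafka:9092"]             # placeholder broker address
  topic: "filebeat-logs"            # placeholder topic
  # More concurrent workers per broker to drain events faster
  # (the default is 1).
  worker: 4
```

Lowering `close_inactive` trades fewer open file handles for the cost of reopening files that receive new lines, so very low values can increase churn on busy files.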