fluent-plugin-s3
Log pushed using s3 output plugin throws error "2020-06-16 13:23:35 +0000 [warn]: #0 [out_s3] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=44.62332443147898 slow_flush_log_threshold=20.0 plugin_id="out_s3""
Hi team, below is my config for td-agent:
# Include config files in the ./config.d directory
@include config.d/*.conf

<match **>
  @type s3
  @id out_s3
  @log_level debug
  aws_key_id "xx"
  aws_sec_key "xx"
  s3_bucket "xx"
  s3_endpoint "xx"
  s3_region xx
  s3_object_key_format %Y-%m-%d-%H-%M-%S-%{index}-%{hostname}.%{file_extension}
  store_as "gzip"
</match>
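Note that this config has no `<buffer>` section, so the s3 output runs with Fluentd's default buffering. As a sketch (all values illustrative, not taken from the original config), this is where chunk size and flush timing would be tuned:

```
<buffer>
  @type file
  path /var/log/td-agent/buffer/s3   # illustrative buffer path
  chunk_limit_size 8m                # smaller chunks mean faster individual uploads
  flush_interval 60s                 # how often chunks are queued for upload
</buffer>
```

This block goes inside the `<match>` section; smaller chunks trade more S3 requests for shorter per-flush times.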
I am using the in_tail plugin to parse logs and the s3 output plugin to ship them, and td-agent is consuming 100% CPU. When I checked the log I saw the error below. Could anyone please let me know what I am missing here?
2020-06-16 13:24:11 +0000 [warn]: #0 [out_s3] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=35.5713529381901 slow_flush_log_threshold=20.0 plugin_id="out_s3"
Plugin version: 'fluent-plugin-s3' version '1.3.2'.
I tried with the options below as well; it didn't help.
2020-06-16 13:24:11 +0000 [warn]: #0 [out_s3] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=35.5713529381901 slow_flush_log_threshold=20.0 plugin_id="out_s3"
This means uploading data to S3 took 35 seconds. I'm not sure why, but if you see this log frequently, check your network or similar. 35 or 44 seconds is very slow for a flush.
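If the network is the bottleneck, two knobs are commonly adjusted on the output: more flush threads, so chunks upload in parallel, and a higher warning threshold so expectedly slow uploads stop logging warnings. A sketch with illustrative values, not a recommendation from the maintainer:

```
<match **>
  @type s3
  # ... existing s3 settings ...
  slow_flush_log_threshold 60.0      # raise the warning threshold (default is 20s)
  <buffer>
    flush_thread_count 4             # upload up to 4 chunks in parallel
  </buffer>
</match>
```

Raising the threshold only silences the warning; `flush_thread_count` is what actually increases upload throughput.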
Yeah, but when I tried with multiple worker instances for tail, the CPU issue was solved and the problem went away, so I am just wondering why.
We see a similar issue: logs are put to S3 with a delay. Can you tell us how you used the worker concept? in_tail supports only one worker; `<worker 0-2>` is not supported.
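For reference, the usual way to combine in_tail with multiple workers is to pin the tail source to a single worker while letting outputs run in all workers. A sketch in Fluentd v1 syntax (paths and tag are illustrative assumptions, not from this thread):

```
<system>
  workers 2
</system>

# in_tail cannot be shared across workers, but it can be pinned to one
<worker 0>
  <source>
    @type tail
    path /var/log/app/*.log          # illustrative path
    pos_file /var/log/td-agent/app.pos
    tag app.logs
    <parse>
      @type none
    </parse>
  </source>
</worker>

# <match> sections placed outside <worker> run in every worker
```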
This issue has been automatically marked as stale because it has been open for 90 days with no activity. Remove the stale label or comment, or this issue will be closed in 30 days.