fluent-plugin-s3

Logs pushed using the s3 output plugin throw the error "2020-06-16 13:23:35 +0000 [warn]: #0 [out_s3] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=44.62332443147898 slow_flush_log_threshold=20.0 plugin_id="out_s3""

Open shilpasshetty opened this issue 5 years ago • 4 comments

Hi team, below is my config for td-agent:

# Include config files in the ./config.d directory
@include config.d/*.conf

<match *>
  @type s3
  @id out_s3
  @log_level debug
  aws_key_id "xx"
  aws_sec_key "xx"
  s3_bucket "xx"
  s3_endpoint "xx"
  s3_region xx
  s3_object_key_format %Y-%m-%d-%H-%M-%S-%{index}-%{hostname}.%{file_extension}
  store_as "gzip"
  time_key time
  tag_key tag
  localtime false
  time_format "%Y-%m-%dT%H:%M:%SZ"
  time_type string
  <format>
    @type json
  </format>
  <buffer time>
    @type file
    path /var/log/fluentd-buffers/s3.buffer
    timekey 60
    flush_at_shutdown true
    timekey_wait 10
    timekey_use_utc true
    chunk_limit_size 10m
  </buffer>
</match>

I am using the in_tail plugin to parse the logs and the s3 output plugin to ship them, and td-agent is consuming 100% CPU. When I checked the log I found the warning below. Could anyone please let me know what I am missing here?

2020-06-16 13:24:11 +0000 [warn]: #0 [out_s3] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=35.5713529381901 slow_flush_log_threshold=20.0 plugin_id="out_s3"

Version: 'fluent-plugin-s3' version '1.3.2'

I also tried the buffer options below; it didn't help:

  <buffer time>
    @type file
    path /var/log/fluentd-buffers/s3.buffer
    timekey 60
    flush_interval 30s
    flush_thread_interval 5
    flush_thread_burst_interval 15
    flush_thread_count 10
    timekey_wait 10
    timekey_use_utc true
    chunk_limit_size 6m
    buffer_chunk_limit 256m
  </buffer>

shilpasshetty avatar Jun 16 '20 13:06 shilpasshetty

2020-06-16 13:24:11 +0000 [warn]: #0 [out_s3] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=35.5713529381901 slow_flush_log_threshold=20.0 plugin_id="out_s3"

This means uploading the data to S3 took 35 seconds. I'm not sure why... but if you see this log frequently, check your network or similar. Basically, 35 or 44 seconds is very slow.

repeatedly avatar Jun 17 '20 00:06 repeatedly
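For reference, one way to act on this advice is to parallelize flushes and relax the warning threshold. Below is a minimal sketch that reuses the out_s3 section from the config above; slow_flush_log_threshold and flush_thread_count are standard Fluentd v1 output/buffer parameters, and the 60s / 4 values are illustrative assumptions, not recommendations.

<match *>
  @type s3
  @id out_s3
  # ... existing credentials, bucket, key format, etc. ...

  # Only warn when a single flush takes longer than 60s (the default is 20s)
  slow_flush_log_threshold 60s

  <buffer time>
    @type file
    path /var/log/fluentd-buffers/s3.buffer
    timekey 60
    timekey_wait 10
    chunk_limit_size 10m
    # Several flush threads let uploads overlap instead of queueing behind one slow PUT
    flush_thread_count 4
  </buffer>
</match>

If the warnings persist even with parallel flushes, the bottleneck is most likely the network path to the S3 endpoint rather than the plugin configuration.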

Yeah, but when I tried multiple worker instances for tail, the CPU issue was solved and the flush problem went away as well, so I am just wondering why.

shilpasshetty avatar Jun 18 '20 15:06 shilpasshetty

We see a similar issue: logs are put to S3 with a delay. Can you tell us how you used the worker concept? in_tail supports only one worker; a <worker 0-2> range is not supported.

smiley-ci avatar Jul 16 '20 03:07 smiley-ci
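For reference, a minimal sketch of the worker setup being discussed, assuming Fluentd v1 multi-process workers; the tail path, pos_file, and tag below are placeholders. The workers parameter in the <system> section starts multiple worker processes, and since in_tail does not support multiple workers it has to be pinned to a single worker with a <worker N> directive (a range such as <worker 0-2> is rejected for plugins without multi-worker support, which matches the error described above).

<system>
  workers 3
</system>

# in_tail cannot run in more than one worker, so pin it to worker 0
<worker 0>
  <source>
    @type tail
    path /var/log/app/*.log                 # placeholder
    pos_file /var/log/td-agent/app.log.pos  # placeholder
    tag app.access
    <parse>
      @type json
    </parse>
  </source>
</worker>

With only the source pinned to worker 0, the s3 <match> section from the config above can stay at the top level; each worker instantiates its own copy of it.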

This issue has been automatically marked as stale because it has been open 90 days with no activity. Remove stale label or comment or this issue will be closed in 30 days

github-actions[bot] avatar Jul 06 '21 10:07 github-actions[bot]