fluent-plugin-s3
fluent-plugin-s3 fails to automatically clean up the buffer's files after pushing to S3
I also encountered the problem that the files in the buffer are not automatically deleted after the push.
<match>
  <buffer>
    @type file
    path /etc/fluentd/temp/moons
    timekey_wait 5
    timekey 1
    chunk_limit_size 256m
  </buffer>
  time_slice_format %Y%m%d%H
</match>
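One thing worth noting about the config above: in the v1 buffer API, `timekey` is a duration in seconds, so `timekey 1` flushes a new chunk every second, and `time_slice_format` is the deprecated v0.12-style option (v1 uses chunk key placeholders instead). A sketch of an equivalent v1-style config (the `**` match pattern, the `1h`/`5m` values, and the key format are assumptions for illustration, not from this thread):

```
<match **>
  @type s3
  # bucket/region/path settings omitted
  s3_object_key_format %{path}%{time_slice}_%{index}.%{file_extension}
  <buffer time>
    @type file
    path /etc/fluentd/temp/moons
    timekey 1h           # a duration, not a count; 1 means one second
    timekey_wait 5m
    chunk_limit_size 256m
  </buffer>
</match>
```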

/assign @repeatedly
I ran into it too, so I switched to a memory buffer.
In my file-to-S3 test, the memory buffer (200MB per minute) had 4x the throughput of the disk buffer (50MB per minute), with gzip.
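Roughly what I mean by switching to a memory buffer (a sketch only; the `**` pattern and the timekey values are placeholders, and keep in mind a memory buffer loses unflushed chunks if the process crashes):

```
<match **>
  @type s3
  # other s3 settings unchanged
  <buffer time>
    @type memory
    timekey 1h
    timekey_wait 5m
    chunk_limit_size 256m
  </buffer>
</match>
```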
This continues to be an issue. Is there any more information on it?
This issue has been automatically marked as stale because it has been open for 90 days with no activity. Remove the stale label or comment, or this issue will be closed in 30 days.
Anyone resolved this issue?
It still doesn't work for me. The buffer for the s3 output plugin grows to ~900MB. The flush interval is 10 minutes.
Any suggestions are welcome.
I switched to a memory buffer and it works well: https://github.com/fluent/fluent-plugin-s3/issues/339#issuecomment-660093235
The disk space of our EKS pods fills up fast when we use fluentd to push to S3.
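In case it helps others debugging the same disk pressure, here is a quick sketch for checking how much space the file buffer is using inside a pod. The path is the one from the config earlier in this thread, and the `buffer.*` file-name pattern and 60-minute threshold are assumptions; adjust both to your setup.

```shell
# Buffer directory to inspect (override with BUFFER_DIR=... if yours differs).
BUFFER_DIR="${BUFFER_DIR:-/etc/fluentd/temp/moons}"

# Total disk usage of the buffer directory.
echo "Buffer disk usage:"
du -sh "$BUFFER_DIR" 2>/dev/null || echo "no buffer directory at $BUFFER_DIR"

# Count chunk files that have not been touched in over an hour --
# a growing count suggests chunks are not being flushed and removed.
echo "Chunk files older than 60 minutes:"
find "$BUFFER_DIR" -name 'buffer.*' -mmin +60 2>/dev/null | wc -l
```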