logstash-output-s3
Temporary directory should be locked to one s3 output.
This issue is related to https://github.com/logstash-plugins/logstash-output-s3/issues/143 and discussion in https://github.com/logstash-plugins/logstash-output-s3/pull/144.
Multiple s3 outputs with `restore` enabled and the default `temporary_directory` can lead to a race condition in crash-recovery scenarios: one s3 output picks up, uploads, and removes the outstanding temporary files from disk, and the remaining s3 outputs then attempt to do the same but fail to find the expected files on disk.
Proposed solution: lock each s3 output to a unique temporary directory by default.
An additional side effect of multiple s3 output plugin instances sharing a temporary directory is that, if the instances are configured to log to different buckets, whichever instance has `restore` enabled will upload all the lingering files to its own bucket rather than the bucket associated with the instance that created them. This means log files can be uploaded to the wrong bucket.
Our fix is to explicitly set `temporary_directory` to a unique path per instance and enable `restore` on all plugins, contrary to what the comments suggest.
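As a sketch, the workaround could look like the following pipeline configuration; the bucket names and directory paths here are placeholders, not values from the issue:

```
output {
  s3 {
    bucket => "app-logs-bucket"            # placeholder bucket name
    # Unique temporary_directory per s3 output avoids the crash-recovery race
    temporary_directory => "/tmp/logstash-s3/app-logs"
    restore => true
  }
  s3 {
    bucket => "audit-logs-bucket"          # placeholder bucket name
    # A different directory, so this instance only restores its own files
    temporary_directory => "/tmp/logstash-s3/audit-logs"
    restore => true
  }
}
```

With separate directories, each instance's `restore` pass only sees files it wrote itself, so recovered files always go to the bucket of the instance that created them.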