fluent-plugin-s3
s3 input is not separating log entries
Describe the bug
Using the s3 input to read CloudWatch Logs files (gzipped JSON) from S3 and forward them to OpenSearch, all entries from each log file in S3 are being put into a single entry in OpenSearch.
To Reproduce
Set up Fluentd using the config below, with proper permissions on S3 and SQS.
Expected behavior
Each log entry should be parsed into a separate entry in OpenSearch.
Your Environment
- Fluentd version: 1.16-1
- TD Agent version:
- fluent-plugin-s3 version: latest
- aws-sdk-s3 version:
- aws-sdk-sqs version:
- Operating system:
- Kernel version:
Your Configuration
<source>
  @type s3
  s3_bucket S3_BUCKET_NAME
  s3_region us-west-2
  add_object_metadata true
  format json
  <sqs>
    queue_name SQS_QUEUE_NAME
  </sqs>
</source>
<match **>
  @type opensearch
  host OPENSEARCH_HOST
  port 9200
  user %{OPENSEARCH_USER}
  password OPENSEARCH_PASSWORD
  scheme https
  include_timestamp true
  logstash_format true
  logstash_prefix OS_INDEX_NAME
  suppress_type_name true
  ssl_verify false
  include_tag_key true
  tag_key _key
</match>
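For reference, CloudWatch Logs files delivered to S3 typically contain a JSON object whose logEvents array holds the individual entries, so a parser that treats the whole payload as one JSON record would produce exactly this symptom. A minimal sketch (the payload below is a hypothetical sample mimicking that delivery format, not taken from this report) of what splitting such a file would look like:

```python
import gzip
import json

# Hypothetical sample in the CloudWatch Logs delivery shape:
# one JSON object, with the individual entries nested under "logEvents".
payload = {
    "messageType": "DATA_MESSAGE",
    "logGroup": "/aws/example",
    "logEvents": [
        {"id": "1", "timestamp": 1691000000000, "message": "first entry"},
        {"id": "2", "timestamp": 1691000001000, "message": "second entry"},
    ],
}
raw = gzip.compress(json.dumps(payload).encode())

# Parsing the whole object yields a single record; splitting on
# "logEvents" recovers the individual log entries.
doc = json.loads(gzip.decompress(raw))
events = doc.get("logEvents", [])
for ev in events:
    print(ev["message"])
```

If the files really have this shape, the fix would involve unwrapping logEvents somewhere in the pipeline rather than indexing the top-level object as one document.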
Your Error Log
No applicable errors are shown.
Additional context
No response
@tfmm did you find any solution?
Not yet, but I haven't been looking into it recently.