fluent-plugin-s3
Want to use variable in s3_bucket
It seems to be impossible to use variables in s3_bucket.
My use case is to store events in different s3 buckets based on the event tag.
I'm trying to specify the bucket name like this: s3_bucket s3-${tag}, but it does not work.
I've also tried using parts of the tag, like ${tag[1]}, without success. Is there any option or workaround to make this possible?
Example configuration:
<match test.**>
@type s3
aws_key_id somekey
aws_sec_key someotherkey
s3_bucket cf-${tag[1]}
check_bucket true
auto_create_bucket true
ssl_verify_peer false
force_path_style true
compute_checksums true
s3_endpoint https://s3.endpoint.local
s3_object_key_format %{path}/${tag[3]}-%H_%{index}.%{file_extension}
path cf/${tag[1]}_${tag[2]}/%Y/%m/%d
time_slice_format %Y%m%d-%H
<buffer tag,time>
@type file
path /opt/buffer/fluent/s3
timekey 3600 # 1 hour partition
timekey_wait 5m
timekey_use_utc false # use local time, not UTC
chunk_limit_size 5120m
</buffer>
</match>
The event I've tested has a tag like this: test.application1.audit.access
So I expect a bucket named cf-application1 and a path of cf/application1_audit/2018/11/14/access-11_0.gz (resolution of the variables in path works fine).
P.S. I'm using td-agent 3.2.1.
> I've also tried to use parts of the tag like ${tag[1]} without success. Is there any option to make it possible with any workaround?
Currently there is no way to do this, because out_s3 resolves the bucket as a static value in #start.
To support a dynamic bucket, the bucket lookup would need to move into #write, using extract_placeholders to expand the configured value per chunk.
https://github.com/fluent/fluent-plugin-s3/blob/3c7b89b637c0688eddf50d08a199171287b3650f/lib/fluent/plugin/out_s3.rb#L213
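For anyone who wants to attempt that change, here is a minimal, untested sketch of the idea against the Fluentd v1 output API. The instance variables (@s3_bucket, @s3, @check_bucket) and the ensure_bucket helper are assumptions for illustration, not the plugin's actual internals:

def write(chunk)
  # Hypothetical change: expand placeholders such as ${tag[1]} against the
  # chunk's metadata instead of using the value resolved once in #start.
  # (extract_placeholders is provided by Fluent::Plugin::Output; on older
  # Fluentd versions it takes chunk.metadata instead of the chunk itself.)
  bucket_name = extract_placeholders(@s3_bucket, chunk)
  bucket = @s3.bucket(bucket_name)        # @s3 assumed to be an Aws::S3::Resource
  ensure_bucket(bucket) if @check_bucket  # hypothetical existence/auto-create check
  # ... build the object key and upload the chunk body as the plugin does today ...
end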
This issue has been automatically marked as stale because it has been open for 90 days with no activity. Remove the stale label or add a comment, or this issue will be closed in 30 days.
This issue was automatically closed after being stale for 30 days.
I am also facing the same issue. I also want to configure the bucket name dynamically; the plugin should pick the bucket name up from the records. Can someone please help me with this? I have been stuck on this issue for a long time.
Is there any workaround for this?
So you need to configure s3_bucket dynamically from the actual records.
I don't think there is a good way to do this right now. :cry: We need to implement it.
If the patterns are limited, you can prepare multiple match sections with static bucket names and route the records to them using tags or labels, as sketched below.
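A minimal sketch of that workaround, assuming only two known applications (application1 and application2); the credentials, bucket names, and buffer paths are placeholders:

<match test.application1.**>
  @type s3
  aws_key_id somekey
  aws_sec_key someotherkey
  s3_bucket cf-application1
  path cf/application1_${tag[2]}/%Y/%m/%d
  <buffer tag,time>
    @type file
    path /opt/buffer/fluent/s3-application1
    timekey 3600
  </buffer>
</match>

<match test.application2.**>
  @type s3
  aws_key_id somekey
  aws_sec_key someotherkey
  s3_bucket cf-application2
  path cf/application2_${tag[2]}/%Y/%m/%d
  <buffer tag,time>
    @type file
    path /opt/buffer/fluent/s3-application2
    timekey 3600
  </buffer>
</match>

Each match section has a static s3_bucket, while the ${tag[...]} placeholders in path continue to resolve as before; the tradeoff is one section (and one buffer directory) per bucket.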