terraform-aws-s3-log-storage
chore(deps): update terraform cloudposse/s3-bucket/aws to v4 (release/v0)
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| cloudposse/s3-bucket/aws (source) | module | major | 3.0.0 -> 4.7.1 |
Release Notes
cloudposse/terraform-aws-s3-bucket (cloudposse/s3-bucket/aws)
v4.7.1
🚀 Enhancements
fix: s3 lambda event notification assignments @mpajuelofernandez (#253)
what
It seems there is a typo here:
```hcl
dynamic "lambda_function" {
  for_each = var.event_notification_details.lambda_list
  content {
    lambda_function_arn = lambda_function.value.arn
    events              = lambda.value.events
    filter_prefix       = lambda_function.value.filter_prefix
    filter_suffix       = lambda_function.value.filter_suffix
  }
}
```
I think it should be
```hcl
dynamic "lambda_function" {
  for_each = var.event_notification_details.lambda_list
  content {
    lambda_function_arn = lambda_function.value.arn
    events              = lambda_function.value.events
    filter_prefix       = lambda_function.value.filter_prefix
    filter_suffix       = lambda_function.value.filter_suffix
  }
}
```
why
The S3 notification cannot be created unless this is fixed.
references
This should fix https://github.com/cloudposse/terraform-aws-s3-bucket/issues/252
🐛 Bug Fixes
fix: s3 lambda event notification assignments @mpajuelofernandez (#253)
🤖 Automatic Updates
Update terratest to '>= 0.46.0' @osterman (#235)
what
- Update terratest to `>= 0.46.0`
why
- Support OpenTofu for testing
References
- https://github.com/gruntwork-io/terratest/releases/tag/v0.46.0
- DEV-374 Add opentofu to all our Terragrunt Testing GHA matrix
Migrate new test account @osterman (#248)
what
- Update `.github/settings.yml`
- Update `.github/chatops.yml` files
why
- Re-apply `.github/settings.yml` from org level to get `terratest` environment
- Migrate to new `test` account
References
- DEV-388 Automate clean up of test account in new organization
- DEV-387 Update terratest to work on a shared workflow instead of a dispatch action
- DEV-386 Update terratest to use new testing account with GitHub OIDC
Update .github/settings.yml @osterman (#247)
what
- Update `.github/settings.yml`
- Drop `.github/auto-release.yml` files
why
- Re-apply `.github/settings.yml` from org level
- Use organization-level auto-release settings
references
- DEV-1242 Add protected tags with Repository Rulesets on GitHub
Update .github/settings.yml @osterman (#246)
what
- Update `.github/settings.yml`
- Drop `.github/auto-release.yml` files
why
- Re-apply `.github/settings.yml` from org level
- Use organization-level auto-release settings
references
- DEV-1242 Add protected tags with Repository Rulesets on GitHub
v4.7.0
Make sure replica_kms_key_id is truly empty @stephan242 (#244)
references
closes #243
v4.6.0
Addition of S3 bucket event notification resource and Addition of S3 directory optional resource @mayank0202 (#240)
Issue - GH-239
what
This feature adds S3 event notifications with three options to trigger a Lambda function, an SQS queue, or an SNS topic, defined via the following resource:
aws_s3_bucket_notification
We also added the S3 directory bucket, a new AWS feature, as an optional resource for anyone who needs to use it via Terraform:
aws_s3_directory_bucket
why
- Enhanced Event-Driven Architecture: The introduction of S3 event notifications allows the S3 bucket to trigger Lambda functions, SQS queues, or SNS topics. This facilitates seamless integration with other AWS services and enables real-time processing of data, which is crucial for building event-driven architectures.
- New AWS Feature Adoption: The addition of the aws_s3_directory_bucket resource reflects the latest AWS capabilities, ensuring that our infrastructure is up-to-date with current AWS offerings. This optional resource allows users to leverage new AWS features as they become available, promoting flexibility and future-proofing our Terraform configurations.
- Improved Flexibility: By providing options to trigger different AWS services (Lambda, SQS, SNS), the solution becomes more versatile, catering to a wide range of use cases and workflows. This flexibility can lead to more efficient and effective data processing pipelines.
- Reduced Operational Overhead: Automating responses to S3 events using Lambda functions, queues, or topics can significantly reduce manual intervention and operational overhead. This leads to improved efficiency and allows teams to focus on higher-value tasks.
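As an illustration of the wiring described above (a sketch only: the resource names and surrounding configuration are hypothetical, not the module's exact interface):

```hcl
# Sketch: notify a Lambda function on object creation under a prefix.
resource "aws_s3_bucket_notification" "example" {
  bucket = aws_s3_bucket.example.id

  # One or more of lambda_function, queue, or topic blocks may be used.
  lambda_function {
    lambda_function_arn = aws_lambda_function.processor.arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "uploads/"
    filter_suffix       = ".json"
  }
}
```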
references
- https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_notification
- https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_directory_bucket
- https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-overview.html
v4.5.0
feat: Add missed tags @MaxymVlasov (#241)
what
Add tags to resources where they were missing
v4.4.0
226: Add Expected Bucket Owner @houserx-ioannis (#238)
what
This PR addresses #226 about not being able to specify expected bucket owner in various S3 resources.
why
From AWS docs:
Because Amazon S3 identifies buckets based on their names, an application that uses an incorrect bucket name in a request could inadvertently perform operations against a different bucket than expected. To help avoid unintentional bucket interactions in situations like this, you can use bucket owner condition. Bucket owner condition enables you to verify that the target bucket is owned by the expected AWS account, providing an additional layer of assurance that your S3 operations are having the effects you intend.
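For illustration, many S3 sub-resources in the AWS provider accept an `expected_bucket_owner` argument; a sketch with placeholder names and account ID:

```hcl
# Sketch: the request fails if a different account owns the bucket.
resource "aws_s3_bucket_logging" "example" {
  bucket                = aws_s3_bucket.example.id
  expected_bucket_owner = "111122223333" # placeholder AWS account ID
  target_bucket         = aws_s3_bucket.logs.id
  target_prefix         = "logs/"
}
```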
references
v4.3.0
Enforce the usage of modern TLS versions (1.2 or higher) for S3 connections @amontalban (#237)
what
This variable adds a policy to the bucket to deny connections that do not use TLS 1.2 or higher.
why
This is required by our security team.
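The enforcement corresponds to a bucket-policy statement along these lines (a sketch of the approach per the AWS knowledge-center article, not the module's exact statement; resource references are placeholders):

```hcl
# Deny any S3 request negotiated with TLS below 1.2 (sketch).
data "aws_iam_policy_document" "enforce_tls" {
  statement {
    sid     = "EnforceTLSv12OrHigher"
    effect  = "Deny"
    actions = ["s3:*"]

    principals {
      type        = "*"
      identifiers = ["*"]
    }

    resources = [
      aws_s3_bucket.example.arn,
      "${aws_s3_bucket.example.arn}/*",
    ]

    condition {
      test     = "NumericLessThan"
      variable = "s3:TlsVersion"
      values   = ["1.2"]
    }
  }
}
```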
references
https://repost.aws/knowledge-center/s3-enforce-modern-tls
🚀 Enhancements
Bump github.com/hashicorp/go-getter from 1.7.1 to 1.7.4 in /test/src @dependabot (#230)
Bumps github.com/hashicorp/go-getter from 1.7.1 to 1.7.4.
Release notes
Sourced from github.com/hashicorp/go-getter's releases.
v1.7.4
What's Changed
- Escape user-provided strings in `git` commands hashicorp/go-getter#483
- Fixed a bug in `.netrc` handling if the file does not exist hashicorp/go-getter#433

Full Changelog: https://github.com/hashicorp/go-getter/compare/v1.7.3...v1.7.4
v1.7.3
What's Changed
- SEC-090: Automated trusted workflow pinning (2023-04-21) by @hashicorp-tsccr in hashicorp/go-getter#432
- SEC-090: Automated trusted workflow pinning (2023-09-11) by @hashicorp-tsccr in hashicorp/go-getter#454
- SEC-090: Automated trusted workflow pinning (2023-09-18) by @hashicorp-tsccr in hashicorp/go-getter#458
- don't change GIT_SSH_COMMAND when there is no sshKeyFile by @jbardin in hashicorp/go-getter#459

New Contributors
- @hashicorp-tsccr made their first contribution in hashicorp/go-getter#432

Full Changelog: https://github.com/hashicorp/go-getter/compare/v1.7.2...v1.7.3
v1.7.2
What's Changed
- Don't override `GIT_SSH_COMMAND` when not needed by @nl-brett-stime in hashicorp/go-getter#300

Full Changelog: https://github.com/hashicorp/go-getter/compare/v1.7.1...v1.7.2
Commits
- 268c11c escape user provide string to git (#483)
- 975961f Merge pull request #433 from adrian-bl/netrc-fix
- 0298a22 Merge pull request #459 from hashicorp/jbardin/setup-git-env
- c70d9c9 don't change GIT_SSH_COMMAND if there's no keyfile
- 3d5770f Merge pull request #458 from hashicorp/tsccr-auto-pinning/trusted/2023-09-18
- 0688979 Result of tsccr-helper -log-level=info -pin-all-workflows .
- e66f244 Merge pull request #454 from hashicorp/tsccr-auto-pinning/trusted/2023-09-11
- e80b3dc Result of tsccr-helper -log-level=info -pin-all-workflows .
- 2d49e24 Merge pull request #432 from hashicorp/tsccr-auto-pinning/trusted/2023-04-21
- 5ccb39a Make addAuthFromNetrc ignore ENOTDIR errors
- Additional commits viewable in compare view
🤖 Automatic Updates
Bump github.com/hashicorp/go-getter from 1.7.1 to 1.7.4 in /test/src @dependabot (#230)
Update release workflow to allow pull-requests: write @osterman (#234)
what
- Update workflow (`.github/workflows/release.yaml`) to have permission to comment on PRs
why
- So we can support commenting on PRs with a link to the release
Update GitHub Workflows to use shared workflows from '.github' repo @osterman (#233)
what
- Update workflows (`.github/workflows`) to use shared workflows from `.github` repo
why
- Reduce nested levels of reusable workflows
Update GitHub Workflows to Fix ReviewDog TFLint Action @osterman (#232)
what
- Update workflows (`.github/workflows`) to add `issue: write` permission needed by ReviewDog `tflint` action
why
- The ReviewDog action will comment with line-level suggestions based on linting failures
Update GitHub workflows @osterman (#231)
what
- Update workflows (`.github/workflows/settings.yaml`)
why
- Support new readme generation workflow.
- Generate banners
Bump golang.org/x/net from 0.8.0 to 0.23.0 in /test/src @dependabot (#229)
Bumps golang.org/x/net from 0.8.0 to 0.23.0.
Commits
- c48da13 http2: fix TestServerContinuationFlood flakes
- 762b58d http2: fix tipos in comment
- ba87210 http2: close connections when receiving too many headers
- ebc8168 all: fix some typos
- 3678185 http2: make TestCanonicalHeaderCacheGrowth faster
- 448c44f http2: remove clientTester
- c7877ac http2: convert the remaining clientTester tests to testClientConn
- d8870b0 http2: use synthetic time in TestIdleConnTimeout
- d73acff http2: only set up deadline when Server.IdleTimeout is positive
- 89f602b http2: validate client/outgoing trailers
- Additional commits viewable in compare view
Use GitHub Action Workflows from `cloudposse/.github` Repo @osterman (#227)
what
- Install latest GitHub Action Workflows
why
- Use shared workflows from `cloudposse/.github` repository
- Simplify management of workflows from centralized hub of configuration
Add GitHub Settings @osterman (#221)
what
- Install a repository config (`.github/settings.yaml`)
why
- Programmatically manage GitHub repo settings
Update README.md and docs @cloudpossebot (#218)
what
This is an auto-generated PR that updates the README.md and docs
why
To have most recent changes of README.md and doc from origin templates
Update Scaffolding @osterman (#219)
what
- Reran `make readme` to rebuild `README.md` from `README.yaml`
- Migrate to square badges
- Add scaffolding for repo settings and Mergify
why
- Upstream template changed in the `.github` repo
- Work better with repository rulesets
- Modernize look & feel
v4.2.0
Added IP-based statement in bucket policy @soya-miyoshi (#216)
what
- Allows users to specify a list of source IP addresses from which access to the S3 bucket is allowed.
- Adds dynamic statement that uses the NotIpAddress condition to deny access from any IP address not listed in the `source_ip_allow_list` variable.
why
Use cases:
- Restricting access to specific physical locations, such as an office or home network
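A statement of the kind described might look like this (a sketch; the bucket references and CIDR are placeholders standing in for `source_ip_allow_list`):

```hcl
# Deny access from any source IP outside the allow list (sketch).
data "aws_iam_policy_document" "ip_allow" {
  statement {
    sid     = "AllowedIPsOnly"
    effect  = "Deny"
    actions = ["s3:*"]

    principals {
      type        = "*"
      identifiers = ["*"]
    }

    resources = [
      aws_s3_bucket.example.arn,
      "${aws_s3_bucket.example.arn}/*",
    ]

    condition {
      test     = "NotIpAddress"
      variable = "aws:SourceIp"
      values   = ["203.0.113.0/24"] # placeholder allow list
    }
  }
}
```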
references
v4.1.0
🚀 Enhancements
fix: use for_each instead of count in aws_s3_bucket_logging @wadhah101 (#212)
what
Replaced the `count` with a `for_each` inside `aws_s3_bucket_logging.default`. There is no point in the `try`, since the type is clearly defined as a list.
why
When the `bucket_name` within the `logging` attribute is dynamically defined, as when referencing a bucket created by Terraform for logging:

```hcl
logging = [
  {
    bucket_name = module.logging_bucket.bucket_id
    prefix      = "data/"
  }
]
```

we get this error. `for_each` works better in this case and resolves it.
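A sketch of the conversion (the module's exact internals may differ; the bucket reference is a placeholder):

```hcl
# for_each keyed by list index: the keys are known at plan time
# even when the values (e.g. bucket_name) are apply-time only.
resource "aws_s3_bucket_logging" "default" {
  for_each = { for idx, log in var.logging : idx => log }

  bucket        = aws_s3_bucket.example.id # placeholder reference
  target_bucket = each.value.bucket_name
  target_prefix = each.value.prefix
}
```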
references
🤖 Automatic Updates
Update README.md and docs @cloudpossebot (#214)
what
This is an auto-generated PR that updates the README.md and docs
why
To have most recent changes of README.md and doc from origin templates
Update README.md and docs @cloudpossebot (#213)
what
This is an auto-generated PR that updates the README.md and docs
why
To have most recent changes of README.md and doc from origin templates
Update README.md and docs @cloudpossebot (#209)
what
This is an auto-generated PR that updates the README.md and docs
why
To have most recent changes of README.md and doc from origin templates
v4.0.1
🐛 Bug Fixes
Fix bug in setting dynamic `encryption_configuration` value @LawrenceWarren (#206)
what
- When trying to create an S3 bucket, the following error is encountered:

```
Error: Invalid dynamic for_each value

on .terraform/main.tf line 225, in resource "aws_s3_bucket_replication_configuration" "default":
225:   for_each = try(compact(concat(
226:     [try(rule.value.destination.encryption_configuration.replica_kms_key_id, "")],
227:     [try(rule.value.destination.replica_kms_key_id, "")]
228:   ))[0], [])
├────────────────
│ rule.value.destination.encryption_configuration is null
│ rule.value.destination.replica_kms_key_id is "arn:aws:kms:my-region:my-account-id:my-key-alias"

Cannot use a string value in for_each. An iterable collection is required.
```
- This is caused in my case by having `s3_replication_rules.destination.encryption_configuration.replica_kms_key_id` set.
why
- There is a bug when trying to create an S3 bucket, which causes an error that stops the bucket being created.
- Basically, there are two attributes that do the same thing (for backwards compatibility):
  - `s3_replication_rules.destination.encryption_configuration.replica_kms_key_id` (newer)
  - `s3_replication_rules.destination.replica_kms_key_id` (older)
- There is logic to:
  - A) use the newer of these two attributes
  - B) fall back to the older attribute if it is set and the newer is not
  - C) fall back to an empty array if nothing is set
- There is a bug in steps A/B, whereby selecting one or the other yields a string value, not an iterable.
- The simplest solution, which I have tested successfully on existing buckets, is to wrap the output of that logic in a list.
- This error is easily replicable by trying `compact(concat([try("string", "")], [try("string", "")]))[0]` in the Terraform console, which is a simplified version of the existing logic used above.
- The table below demonstrates the possible values of the existing code; the outputs for value 2, value 3, and value 4 are not lists:
| Key | Value 1 | Value 2 | Value 3 | Value 4 |
|---|---|---|---|---|
| newer | `null` | `"string1"` | `null` | `"string1"` |
| older | `null` | `null` | `"string2"` | `"string2"` |
| output | `[]` | `"string1"` | `"string2"` | `"string1"` |
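One way to express the fix described above is to wrap the selected key in a list, so `for_each` always receives an iterable (a sketch of the approach, not necessarily the exact patch):

```hcl
# Wrap the chosen KMS key (if any) in a single-element list;
# fall back to an empty list when neither attribute is set.
for_each = try(
  [compact(concat(
    [try(rule.value.destination.encryption_configuration.replica_kms_key_id, "")],
    [try(rule.value.destination.replica_kms_key_id, "")],
  ))[0]],
  [],
)
```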
v4.0.0
Bug fixes and enhancements combined into a single breaking release @aknysh (#202)
Breaking Changes
Terraform version 1.3.0 or later is now required.
policy input removed
The deprecated policy input has been removed. Use source_policy_documents instead.
Convert from

```hcl
policy = data.aws_iam_policy_document.log_delivery.json
```

to

```hcl
source_policy_documents = [data.aws_iam_policy_document.log_delivery.json]
```
Do not use list modifiers like `sort`, `compact`, or `distinct` on the list, or it will trigger an `Error: Invalid count argument`. The length of the list must be known at plan time.
Logging configuration converted to list
To fix #182, the logging input has been converted to a list. If you have a logging configuration, simply surround it with brackets.
Replication rules brought into alignment with Terraform resource
Previously, the s3_replication_rules input had some deviations from the aws_s3_bucket_replication_configuration Terraform resource. Via the use of optional attributes, the input now closely matches the resource while providing backward compatibility, with a few exceptions.
- Replication `source_selection_criteria.sse_kms_encrypted_objects` was documented as an object with one member, `enabled`, of type `bool`. However, it only worked when set to the string `"Enabled"`. It has been replaced with the resource's choice of `status` of type String.
- Previously, Replication Time Control could not be set directly. It was implicitly enabled by enabling Replication Metrics. We preserve that behavior even though we now add a configuration block for `replication_time`. To enable Metrics without Replication Time Control, you must set `replication_time.status = "Disabled"`.

These are not changes, just continued deviations from the resource:
- `existing_object_replication` cannot be set.
- `token` to allow replication to be enabled on an Object Lock-enabled bucket cannot be set.
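As a purely illustrative sketch (attribute names follow the notes above, not a verified schema; the destination ARN is a placeholder), a rule enabling Metrics without Replication Time Control might look like:

```hcl
s3_replication_rules = [
  {
    status = "Enabled"
    destination = {
      bucket = "arn:aws:s3:::replica-bucket" # placeholder
    }
    source_selection_criteria = {
      sse_kms_encrypted_objects = {
        status = "Enabled" # string status, replacing the old bool `enabled`
      }
    }
    metrics = {
      status = "Enabled"
    }
    # Opt out of Replication Time Control explicitly:
    replication_time = {
      status = "Disabled"
    }
  }
]
```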
what
- Remove local `local.source_policy_documents` and deprecated variable `policy` (because of that, bump the module to a major version)
- Convert `lifecycle_configuration_rules` and `s3_replication_rules` from loosely typed objects to fully typed objects with optional attributes
- Use local `bucket_id` variable
- Remove comments suppressing Bridgecrew rules
- Update tests to Golang 1.20
why
- The number of policy documents needs to be known at plan time. The default value of `policy` was empty, meaning it had to be removed based on content, which would not be known at plan time if the `policy` input was being generated.
- Closes #167, supersedes and closes #163, and generally makes these inputs easier to deal with, since they now have type checking and partial defaults, meaning the inputs can be much smaller.
- Incorporates and closes #197. Thank you @nikpivkin
- Suppressing Bridgecrew rules Cloud Posse does not like should be done via external configuration so that users of this module can have the option of having those rules enforced.
- Security and bug fixes
explanation
Any list-manipulation function should not be used in `count`, since it can lead to this error:

```
│ Error: Invalid count argument
│
│   on ./modules/s3_bucket/main.tf line 462, in resource "aws_s3_bucket_policy" "default":
│  462:   count = local.enabled && (var.allow_ssl_requests_only || var.allow_encrypted_uploads_only || length(var.s3_replication_source_roles) > 0 || length(var.privileged_principal_arns) > 0 || length(local.source_policy_documents) > 0) ? 1 : 0
│
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to
│ first apply only the resources that the count depends on.
```

Using the local like this

```hcl
source_policy_documents = var.policy != "" && var.policy != null ? concat([var.policy], var.source_policy_documents) : var.source_policy_documents
```

would not work either if `var.policy` depends on apply-time resources from other TF modules.
General rules:
- When using `for_each`, the map keys have to be known at plan time (the map values are not required to be known at plan time).
- When using `count`, the length of the list must be known at plan time; the items inside the list are not. That does not mean the list must be static with its length known in advance: the list can be dynamic and come from a remote state or data sources, which Terraform evaluates first during plan. It just can't come from other resources (which are only known after apply).
- When using `count`, no list-manipulating functions can be used in `count`; in some cases this leads to the `The "count" value depends on resource attributes that cannot be determined until apply` error.
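A compact illustration of the rules above (variable and resource names are hypothetical):

```hcl
# OK: length depends only on a variable, so it is known at plan time.
count = length(var.source_policy_documents) > 0 ? 1 : 0

# Not OK: compact() over an apply-time resource attribute makes the
# resulting length unknowable during plan, triggering the count error.
# count = length(compact([aws_iam_role.replication[0].arn])) > 0 ? 1 : 0
```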
v3.1.3
Unfortunately, this change makes count unknown at plan time in certain situations. In general, you cannot use the output of compact() in count.
The solution is to stop using the deprecated policy input and revert to 3.1.2 or upgrade to 4.0.
🚀 Enhancements
Fix `source_policy_documents` combined with `var.policy` being ignored @johncblandii (#201)
what
- Changed `var.source_policy_documents` to `local.source_policy_documents` so `var.policy` usage was still supported
why
- The ternary check uses `var.source_policy_documents`, so `var.policy` being combined with `var.source_policy_documents` into `local.source_policy_documents` does not provide `true` for the ternary to execute
references
v3.1.2: Fix Public Bucket Creation
What's Changed
- Remove reference to TF_DATA_DIR retained by mistake in #40 by @Nuru in https://github.com/cloudposse/terraform-aws-s3-bucket/pull/181
- Sync .github by @max-lobur in https://github.com/cloudposse/terraform-aws-s3-bucket/pull/183
- Fix linters / Retest on AWS provider V5 by @max-lobur in https://github.com/cloudposse/terraform-aws-s3-bucket/pull/188
- Fix Public Bucket Creation by @rankin-tr in https://github.com/cloudposse/terraform-aws-s3-bucket/pull/194
New Contributors
- @rankin-tr made their first contribution in https://github.com/cloudposse/terraform-aws-s3-bucket/pull/194
Full Changelog: https://github.com/cloudposse/terraform-aws-s3-bucket/compare/3.1.1...3.1.2
v3.1.1
🐛 Bug Fixes
Revert change to Transfer Acceleration from #178 @Nuru (#180)
what
- Revert change to Transfer Acceleration from #178
why
- Transfer Acceleration is not available in every region, and the change in #178 (meant to detect and correct drift) does not work (throws API errors) in regions where Transfer Acceleration is not supported
v3.1.0: Support new AWS S3 defaults (ACL prohibited)
Note: this version introduced drift detection and correction for Transfer Acceleration. Unfortunately, that change prevents deployment of buckets in regions that do not support Transfer Acceleration. Version 3.1.1 reverts that change so that S3 buckets can be deployed by this module in all regions. It does, however, mean that when var.transfer_acceleration_enabled is false, Terraform does not track or revert changes to Transfer Acceleration made outside of this module.
Make compatible with new S3 defaults. Add user permissions boundary. @Nuru (#178)
what
- Make compatible with new S3 defaults by setting S3 Object Ownership before setting ACL and disabling ACL if Ownership is "BucketOwnerEnforced"
- Add optional permissions boundary input for IAM user created by this module
- Create `aws_s3_bucket_accelerate_configuration` and `aws_s3_bucket_versioning` resources even when the feature is disabled, to enable drift detection
why
- S3 buckets with ACLs were failing to be provisioned because the ACL was set before the bucket ownership was changed
- Requested feature
- See #171
references
Always include `aws_s3_bucket_versioning` resource @mviamari (#172)
what
- Always create an `aws_s3_bucket_versioning` resource to track changes made to bucket versioning configuration
why
- When there is no `aws_s3_bucket_versioning` resource, the expectation is that bucket versioning is disabled/suspended for the bucket. If bucket versioning is turned on outside of Terraform (e.g. through the console), the change is not detected by Terraform unless the `aws_s3_bucket_versioning` resource exists.
references
- Closes #171
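A sketch of the always-created resource pattern (names and flags are hypothetical, not the module's exact code):

```hcl
# Create the versioning resource even when versioning is off, so Terraform
# records "Suspended" explicitly and detects out-of-band enabling.
resource "aws_s3_bucket_versioning" "default" {
  count  = local.enabled ? 1 : 0
  bucket = aws_s3_bucket.default[0].id

  versioning_configuration {
    status = var.versioning_enabled ? "Enabled" : "Suspended"
  }
}
```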
Add support for permission boundaries on replication IAM role @mchristopher (#170)
what
- Adds support for assigning permission boundaries to the replication IAM role
why
- Our AWS environment enforces permission boundaries on all IAM roles to follow AWS best practices with security.
references
🤖 Automatic Updates
Update README.md and docs @cloudpossebot (#164)
what
This is an auto-generated PR that updates the README.md and docs
why
To have most recent changes of README.md and doc from origin templates
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
- [ ] If you want to rebase/retry this PR, check this box
This PR was generated by Mend Renovate. View the repository job log.