remote-state/s3: Use S3 read-after-write consistency for state locking (#27070)
The DynamoDB lock table is now unnecessary and the same approach as the GCS backend can be used here to write the lock file directly to S3.
This should simplify the usage of Terraform's S3 backend as it removes an extra component.
Thanks for this submission. Although I cannot commit to having this PR reviewed at this time, we acknowledge your contribution and appreciate it!
I'll also pass this along to the AWS provider team, who reviews backend changes.
Thanks again for the submission!
Thanks for the PR, @dzeromsk. Unfortunately, according to the AWS documentation on concurrency in the S3 data consistency model (https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html#ConsistencyModel), we cannot rely on read-after-write consistency for locking.
The documentation notes:
Amazon S3 does not support object locking for concurrent writers. If two PUT requests are simultaneously made to the same key, the request with the latest timestamp wins. If this is an issue, you must build an object-locking mechanism into your application.
We will reconsider this feature if the S3 consistency guarantees change in the future.
@gdavison you are right that writes to the same key can be a problem, but I'm not writing to the same key. My proposal is to write to a random key and then read it back. As the documentation you pointed to says, reads of newly written objects are consistent after the write.
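Roughly, the idea looks like this. This is only a sketch of the protocol as I understand it, with an in-memory map standing in for the S3 calls the real backend would make (PutObject, ListObjectsV2, DeleteObject); all names here are illustrative, not the actual PR code:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"strings"
	"sync"
)

// store stands in for the S3 bucket; in the real backend these would be
// PutObject / ListObjectsV2 / DeleteObject calls against S3.
type store struct {
	mu   sync.Mutex
	objs map[string][]byte
}

func (s *store) put(key string, data []byte) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.objs[key] = data
}

func (s *store) list(prefix string) []string {
	s.mu.Lock()
	defer s.mu.Unlock()
	var keys []string
	for k := range s.objs {
		if strings.HasPrefix(k, prefix) {
			keys = append(keys, k)
		}
	}
	return keys
}

func (s *store) del(key string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.objs, key)
}

// tryLock writes a lock object under a random key, then lists the lock
// prefix. Since each writer uses a unique key, no two writers ever PUT
// to the same object; only read-after-write visibility of new objects
// is relied on. If any other lock object is visible, we back off and
// remove our own (under heavy contention all parties may back off and
// retry, which is safe but can livelock).
func tryLock(s *store, prefix string) (string, bool) {
	id := make([]byte, 8)
	if _, err := rand.Read(id); err != nil {
		return "", false
	}
	key := prefix + hex.EncodeToString(id)
	s.put(key, []byte("locked"))
	if len(s.list(prefix)) > 1 {
		s.del(key) // someone else holds (or is racing for) the lock
		return "", false
	}
	return key, true
}

func main() {
	s := &store{objs: map[string][]byte{}}
	key, ok := tryLock(s, "env/.tflock/")
	fmt.Println("first attempt acquired:", ok)
	_, ok2 := tryLock(s, "env/.tflock/")
	fmt.Println("second attempt acquired:", ok2)
	s.del(key) // unlock
}
```

The point is that the failure mode from the quoted documentation (last-timestamp-wins on concurrent PUTs to one key) never arises, because every locker writes a distinct key and contention is detected by listing the lock prefix afterwards.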
I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active contributions. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.