Proposal: State Encryption
Currently we have several resources that retrieve or generate secrets, and wherever these secrets are used to populate other resources or configure other providers, they must necessarily be stored in the state.
Such resources include:
- aws_db_cluster (password attribute)
- azurerm_virtual_machine (machine login passwords)
- tls_private_key
- vault_generic_secret (both managed resource and data source) (#9158)
- ...and many other resources that use or produce passwords and keys
This causes some conflict, because Terraform's design originally assumed that the state was just a local cache of some remote data, and was fine to e.g. check into a git repo alongside the configuration, or to publish somewhere for consumption in other downstream Terraform configurations. It can be surprising and troublesome for secret values to show up in Terraform states that are being used in these ways.
Proposal: Encrypt the state at rest
To address this issue in a way that does not significantly increase Terraform's core complexity, I propose that we address this by allowing Terraform state to optionally be encrypted, as a whole, at rest. That is to say that the state file stored on local disk and on the remote storage target would be some sort of ciphertext of the state, and for each operation Terraform would retrieve this and decrypt it in memory only for use during that operation, re-encrypting it before writing any changes.
Encrypting the entire state is a rather blunt instrument, but it has the advantage of allowing the encryption to be orthogonal to other concerns in Terraform, and thus makes it easy to reason about its behavior and understand what is and is not encrypted: all, or nothing.
State Encryption Backends
Terraform already has the concept of a remote state storage backend. This proposal introduces a similar but orthogonal concept alongside that: a state encryption backend.
An encryption backend is responsible for translating from a cleartext state file to an encrypted one and vice-versa. The backend defines exactly what format the encrypted state file is stored in, and its contents are opaque to the rest of Terraform.
This would be enabled along with remote state storage in terraform remote config:
terraform remote config \
-backend="consul" \
-backend-config="address=consul.example.com:8500" \
-backend-config="path=terraform/foo.tfstate" \
-encrypt="vault" \
-encrypt-config="address="https://vault.example.com/" \
-encrypt-config="mount_point=foobaz" \ # "transit" by default
-encrypt-config="key=terraform"
Since encryption backend is orthogonal to storage backend, it's possible to mix and match these as desired. In the above example the data is encrypted using Vault's "transit" secret backend and stored in Consul. Another useful encryption backend would be for Amazon KMS, which serves a similar purpose to Vault's transit backend, and could make a good companion to the "s3" storage backend in an AWS-centric environment.
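For illustration, an AWS-centric pairing using the same hypothetical flags might look like this (the "kms" encryption backend and its options are invented for this example; none of this exists yet):

terraform remote config \
-backend="s3" \
-backend-config="bucket=example-terraform-states" \
-backend-config="key=foo/terraform.tfstate" \
-backend-config="region=us-east-1" \
-encrypt="kms" \
-encrypt-config="key_id=alias/terraform-state"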
Introducing such a concept would be a pretty isolated change that would affect only the state management portions of Terraform:
- New implementations of the state interfaces in the "state" package would deal with the encryption and decryption steps, separately from the rest of the mechanism.
- terraform remote config needs to learn two new options: -encrypt and -encrypt-config
- Optionally, we may choose to inform the remote state storage backends about whether they are being given cleartext or ciphertext, so that they can make any relevant adjustments to stored object metadata. For example, the "s3" backend would presumably use application/json as the Content-Type for cleartext, but it might be better to use application/octet-stream for encrypted data.
Storing State in Git
Historically the Terraform docs suggested that storing state files in git was a reasonable way to share them within a team. The "remote state" mechanism has subsequently superseded that, and this proposal considers remote state as the primary mechanism for collaboration and supports encryption of state only in conjunction with remote state.
Moving down this path would be a good opportunity to officially deprecate the suggestion of storing state files in git repositories, and strongly encourage the use of remote state with encryption.
Effect on Remote State Workflow
Terraform maintains a local copy of the remote state as a "cache". With state encryption enabled, this local copy would also be encrypted at rest, so Terraform would need to retrieve and decrypt both the local cache and the remotely stored copy in order to do comparison/sync operations, including the terraform remote push and terraform remote pull commands along with the various similar implicit actions taken during other commands that read and write Terraform state.
Effect on Remote State as a Collaboration Tool
The terraform_remote_state data source has encouraged the use of state files as a means of passing data from one Terraform configuration to another, effectively creating a DAG of separately-maintained Terraform configs.
This can be a powerful tool for managing complex environments, where one large configuration and associated state would be unwieldy. However, it has the rather odd consequence that the entire state is shared merely to allow another config to retrieve the outputs; downstream consumers of the state necessarily have access to all of the gory details of how these resources are created, even though Terraform entirely ignores them.
A while back I'd proposed #3164 to address a related concern around sharing states for collaboration: that the data I wanted to share was at a different level of abstraction than the configurations that produced it. As @phinze rightfully pointed out in that discussion, that proposal (and indeed the terraform_remote_state data source itself) are really just trying to pass an arbitrary bag of key/value pairs by smuggling it inside a larger data structure.
In the mean time we have implemented the concept of data sources, which make retrieving data a first-class idea in Terraform. My suggestion is that we move away from terraform_remote_state as the primary suggested collaboration tool, and instead use more general intermediaries for passing such data.
Per the discussion in #3164, I've subsequently transitioned all of the "tree of configs" stuff in my employer's world over to using Consul resources, and no longer use terraform_remote_state at all. Instead, the "parent" configurations use the consul_key_prefix resource to write sets of data conveniently to Consul as discrete keys, and then the "child" configurations use the consul_keys data source to retrieve those keys. This change also had the positive consequence of making the same data visible to other consumers beyond Terraform, such as in our use of consul-template to configure applications.
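A minimal sketch of that pattern, with made-up resource names and key paths (the consul_key_prefix resource and consul_keys data source are real; everything else here is illustrative):

# "parent" configuration: publish values as discrete Consul keys
resource "consul_key_prefix" "shared_network" {
  path_prefix = "terraform/shared/network/"
  subkeys = {
    "vpc_id"    = "${aws_vpc.main.id}"
    "subnet_id" = "${aws_subnet.main.id}"
  }
}

# "child" configuration: read the same keys back
data "consul_keys" "shared_network" {
  key {
    name = "vpc_id"
    path = "terraform/shared/network/vpc_id"
  }
}

# ...then reference "${data.consul_keys.shared_network.var.vpc_id}" elsewhere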
Consul is currently the most compelling way to do this sharing due to the good usability of Terraform's Consul provider. The aws_s3_bucket_object resource and data source could be used similarly, though could perhaps benefit from an analog of consul_key_prefix to enable managing multiple related keys in a convenient and robust way. Similar such backends could include etcd, Google Cloud Storage, and (for things that are secret in nature despite being shared between configs) Vault.
We might choose to allow encrypt and encrypt_config as attributes of the terraform_remote_state data source so that encrypted state can still be read by those who have access to the relevant credentials. I expect the use-cases for such a thing would be pretty narrow and fraught with gotchas, so personally I would always prefer to expose more carefully only the specific attributes that need to be exposed, in a manner that is most appropriate for each attribute.
Effect on "state surgery" to work around Terraform issues
I'm sure most teams managing non-trivial configurations with Terraform have at least once resorted to manually tweaking the contents of a Terraform state to work around some sort of tangle that has either been created by outside config drift or by Terraform itself. On my team we call this "state surgery" and have indeed needed to do it several times over the years for one reason or another.
This sort of process would become much more difficult with encrypted state files, since it would no longer be possible (or at least, not straightforward) to edit the local state cache in place to "trick" Terraform.
Fortunately, in 0.7 the new terraform state family of commands has significantly reduced the need for manual state surgery. Continued investment in this area to cover other "state surgery" use-cases should remove the need for such manual tweaking, allowing changes to be made somewhat more safely.
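For example, the 0.7 subcommands already cover the most common cases:

# inspect what the state tracks
terraform state list
terraform state show aws_instance.example

# rename a resource in state without recreating it
terraform state mv aws_instance.example aws_instance.renamed

# forget a resource without destroying the real object
terraform state rm aws_instance.orphaned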
References
- #516 originally introduced this problem, and has some good discussion of tradeoffs and use-cases that eventually motivated this proposal.
- #1421 suggests allowing encryption of the state file, which is essentially what this proposal offers.
- #9158, the PR that proposes to create a Vault provider, also discusses this.
- #5374 suggested storing state within Vault itself, as an alternative. @phinze made a comment there that seems to suggest something like this proposal is coming, and suggests using S3+KMS via their built-in integration in the mean time.
What is the status here?
From my point of view, there should be no link between config parameters (passwords) and state files. I still see a need to have the state file encrypted. I also think there should be NO password in the state file, but rather a pointer to where to find the password.
I also don't understand people "sharing" the state file... If you have a need to share something, maybe that's something to be added to Terraform. The state file is an "internal" view of the currently running architecture; it's not a config file.
I totally agree on having provider resources to pull/push sensitive data (e.g. passwords). Using Vault as an endpoint for this sounds great to me, as it would allow Terraform to use an ENV token to gain access to these data, then use them to deploy, replacing the pointers in the remote state file with the values from Vault...
The forthcoming version 0.9 contains some reworking of Terraform's handling of states that will, amongst other things, make this easier to implement in a future release. I can't say exactly when that will be (I don't have visibility into the official roadmap) but the technical blockers on this will be much diminished once 0.9 is released.
I suppose it's worth noting that the usage examples in my original proposal here are no longer valid with the changes in 0.9. Instead of configuring encryption on the terraform remote config command line as I showed, the encryption configuration would most likely end up appearing as a new block in the new backend part of the configuration:
terraform {
backend "consul" {
# ...
# HYPOTHETICAL ENHANCEMENT -- NOT YET SUPPORTED
encryption "vault" {
address = "https://vault.example.com/"
mount_point = "foobaz" # "transit" by default
key = "terraform"
}
}
}
Can the Vault generic secret store become a separate Terraform backend? We could then remove the dependency on Consul.
@mkuzmin in principle that is possible, but I've seen the Vault team recommend against storing non-trivially-sized things in Vault's generic backend, and instead to use the transit backend to encrypt for storage elsewhere. That recommendation is what this design was based on.
@apparentlymart regarding https://github.com/hashicorp/terraform/issues/9556#issuecomment-277847071: is this supported in the 0.9 release?
As discussed at HashiDays NY with @phinze, I'm also interested in this feature of encrypting state before saving it to a remote location. There are already open-source projects that allow encrypting JSON files with AWS KMS keys:
https://github.com/agilebits/sm
https://github.com/mozilla/sops
A manual workflow could be:
- save the state locally, encrypt it
- upload it to remote state storage
then, on a second machine:
- fetch the encrypted file
- decrypt it and import it as the local state
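A rough sketch of that manual workflow using sops with a KMS key (bucket name, key ARN, and file names are placeholders):

# machine A: encrypt the freshly written state, then upload only the ciphertext
sops --encrypt --kms "arn:aws:kms:us-east-1:111111111111:key/example" \
  terraform.tfstate > terraform.tfstate.enc
aws s3 cp terraform.tfstate.enc s3://example-bucket/terraform.tfstate.enc

# machine B: fetch the ciphertext and decrypt it back into a local state file
aws s3 cp s3://example-bucket/terraform.tfstate.enc .
sops --decrypt terraform.tfstate.enc > terraform.tfstate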
I was considering writing a Consul HTTP proxy that you could use as a Consul backend for tf, encrypting/decrypting all the values through Vault transit. Not sure if it's worth it now; depends on the timing. (Can I get that ballparked at all without calling in a support contract?) I like the new direction for an encryption provider; it's the same adapter pattern, but internalized and simplified.
The Consul sharing works (should work; good idea!) for my team. But Consul could be (or seem like) a barrier to entry in a cross-team situation. If an upstream team already has Consul deployed, and remains aware that Consul has a REST API, a terraform output -json could just be thrown in alongside the encrypted state, and is only a GET away. (Super easy to add when using encryption + Consul.)
But being able to replace terraform_remote_state with essentially an output/input JSON interface is much more flexible for sharing state (like with Ansible). And for consuming, the data.external's config can just be bash -c cat output.json. (That could also be simplified.)
This new way to think about remote state is almost burying the lede when it comes to enterprise, though. Bravo.
I'd like to see this feature implemented in core, along with other encryption efforts. In the mean time, I came across this tool: terrahelp
@apparentlymart What's the status on this? Would this get accepted into terraform if I would implement it? Or are there any technical blockers on this?
Hi @simonre!
The architecture of "backends" in Terraform changed significantly since I originally proposed this (which was before I was a HashiCorp employee), so I expect we'll need to do another round of design work before deciding what is the right thing to do here. There has also been some disagreement in subsequent discussions about whether whole-state encryption is actually what's needed here, or whether encryption of specific sensitive values is actually the requirement: whole-state encryption is a pretty blunt instrument, requiring that any particular individual either have access to the entire state (which is required to run Terraform at all) or none of it. With more precision, it may be possible to have different sensitivity levels for different information, and to permit certain operations to complete without access to the sensitive information at all.
In practice, many users have implemented a system functionally equivalent to what I proposed here by selecting a backend that has its own built-in support for encryption at rest, such as S3. If using S3 with its built-in encryption is not sufficient, I doubt that what I proposed here would be sufficient either, since it has much the same security characteristics.
However, if there are some ways that the S3 backend (or any other backends that similarly talk to a store with native support for encryption at rest) could better support that use-case, a good nearer-term change would be to add additional capabilities directly to those backends to better exploit those built-in features. If anyone has any ideas about that I'd encourage opening a separate issue to discuss them before implementation, since for security-related changes in particular it's good to talk through the design to inform the implementation.
Hey, has there been any progress on this issue? We've been using the Consul backend and would like to start integrating our Terraform workflows with Vault, but leaking secrets to the unencrypted Consul backend more or less makes this a no-go.
Thanks for the detailed proposal @apparentlymart, I know this has been open a while. I have a few things to add that may be of some use. Restricting access to your cloud storage of choice is probably the best way to get at-rest encryption and RBAC controls. There are still situations where bring-your-own-key makes sense (e.g. non-cloud usage or app.terraform.io). Also, some companies want KEK support on all storage, or at least on highly sensitive storage (which TF state is). A solution could leverage the Sensitive schema property to identify fields that need encryption. Additionally, I know there is a lot of provider work here too: there are a number of resources that store secret properties for use downstream in the DAG but are not really needed in storage. One example I use is azurerm_storage_account, which will store the access keys; in reality, these could be read at apply time. I'm not sure this flow has support in core Terraform, but it may be of value to reduce the exposed surface area of secrets in state.
The problem with the at-rest encryption provided by some cloud provider storage services, like Azure Storage Accounts, is that it is transparent encryption. When the encryption/decryption is transparently handled by the cloud provider, it really just becomes a proxy for RBAC. While transparent at-rest encryption provides protection against some threats/risks (e.g. a hard drive is stolen or lost from the cloud provider's facility), it provides no protection against much more common threats (e.g. a trusted operator is phished for their credentials). In this case, Terraform-managed state encryption makes a lot of sense. For example, the encryption key could be securely stored in a CI/CD system and provided to Terraform via configuration; Terraform encrypts the state file before storage, and the encrypted blob is stored in the backend provider.
Now an attacker who phishes a trusted operator with RBAC access to the backend can only steal an encrypted blob.
Personally I think that trying to mark and encrypt only sensitive data is too complex and error-prone; just encrypt it all. The truth is it's all sensitive: do you really want an attacker to have a map of your system?
Putting this protection in Terraform core and encrypting the whole file would mean no longer needing to rely on every provider implementation to get security right. I would love to see @apparentlymart's original proposal updated for modern versions of Terraform (the remote state is no longer cached locally, and the state surgery commands have come a long way; both things simplify the proposal), and implemented soon.
Given that a lot of my focus recently has been on compliance, building in the ability to encrypt and decrypt the tfstate with a given KMS key, say, doesn't seem like it would be a lot of work, and it would massively reduce the need for wrapper scripts. However, I do see a need for a user to be able to decrypt this state file manually in order to perform state surgery.
I'd love to see this feature in terraform.
I agree with @jamesrcounts. I would essentially want a good way for the entire tfstate file to be an encrypted blob, because storing it inside an S3 bucket isn't enough. The S3 bucket could be compromised by a phishing attack or a malicious insider. Essentially we are left working around the unencrypted blob with permission controls.
I know https://github.com/opencredo/terrahelp exists; that kind of makes the encryption of the tfstate a two-step thing, but I don't see why it couldn't be combined with terraform itself.
terraform init --encrypted etc... and the secret could be from vault, ssm, etc...
Any progress with this?
I have implemented a possible solution that should be able to transparently encrypt state for all remote storage backend providers; see draft PR https://github.com/hashicorp/terraform/pull/28278. I successfully tested it with the azurerm backend today.
@StephanHCB nice work!
Following up on the discussion on https://github.com/hashicorp/terraform/pull/21558 and a suggestion by @allantargino:
Proposal: how to implement client-side remote state encryption (with prototype code)
The central element of the proposal is the introduction of a state crypto provider that transparently encrypts the statefile contents before they are sent to the remote backend.
// StateCryptoProvider is the interface that must be implemented for a transparent remote state
// encryption layer. It is used to encrypt/decrypt the state payload before writing
// to or after reading from the remote state driver.
//
// Note that the encrypted payload must still be valid json, because some remote state drivers
// expect valid json.
type StateCryptoProvider interface {
    // implement this method to decrypt the encrypted payload
    //
    // encryptedPayload is a json document passed in as a []byte
    //
    // if you do not return an error, you MUST ensure you return a json document as
    // a []byte, because some statefile storage backends rely on this
    Decrypt(encryptedPayload []byte) ([]byte, error)

    // implement this method to encrypt the plaintext payload
    //
    // plaintextPayload is a json document passed in as a []byte
    //
    // if you do not return an error, you MUST ensure you return a json document as
    // a []byte, because some statefile storage backends rely on this
    Encrypt(plaintextPayload []byte) ([]byte, error)
}
Since some backend implementations assume the state to be json, any state crypto provider must encode the encrypted state into json.
I wrote the simplest implementation of the interface above, called passthrough. It does exactly what you'd expect, and it is instantiated by default.
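A passthrough provider satisfying the interface above needs only a few lines; presumably something along these lines (a sketch, not the exact prototype code):

// PassthroughStateCryptoProvider implements StateCryptoProvider without
// altering the payload in either direction.
type PassthroughStateCryptoProvider struct{}

// Decrypt returns the payload unchanged
func (p *PassthroughStateCryptoProvider) Decrypt(encryptedPayload []byte) ([]byte, error) {
    return encryptedPayload, nil
}

// Encrypt returns the payload unchanged
func (p *PassthroughStateCryptoProvider) Encrypt(plaintextPayload []byte) ([]byte, error) {
    return plaintextPayload, nil
}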
Other examples include:
- standalone client side encryption (part of the prototype here)
- client side encryption with a key retrieved from Azure Key Vault (easily pulled out of https://github.com/hashicorp/terraform/pull/21558)
- client side encryption with a key managed by Hashicorp Vault
- ...
Transparent client-side remote state encryption and decryption can be introduced for all remote state backends (except the extended backends). The change in existing code is limited to a single place for each direction, namely near the calls to Client.Get and Client.Put in states/remote/state.go, as sketched below.
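Conceptually, the two hooks would look something like this (a simplified sketch, not the actual code in states/remote/state.go; the Client interface is reduced to the two relevant methods):

// Client is the remote state client, reduced here to the two relevant methods
type Client interface {
    Get() ([]byte, error)
    Put([]byte) error
}

// on read: fetch ciphertext from the backend, then decrypt it for Terraform
func readState(c Client, crypto StateCryptoProvider) ([]byte, error) {
    payload, err := c.Get()
    if err != nil {
        return nil, err
    }
    return crypto.Decrypt(payload)
}

// on write: encrypt the state, then hand only ciphertext to the backend
func writeState(c Client, crypto StateCryptoProvider, state []byte) error {
    encrypted, err := crypto.Encrypt(state)
    if err != nil {
        return err
    }
    return c.Put(encrypted)
}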
When transparently encrypting the state, one must consider these lifecycle events:
- initial encryption
- the way out (decrypting state that is currently encrypted)
- key rotation
- switching state encryption providers (e.g. there is a security issue with one of them, or a wish to migrate)
Proposed solution to handle all these: allow the user to configure two state crypto providers
- a main configuration that is used for encryption and tried first for decryption
- a fallback configuration that is tried for decryption if decryption fails with the main configuration
See this possible implementation.
Then the lifecycle events can be performed as follows:
- initial encryption: set up main configuration, leave fallback configuration blank, then cause a change in state
- decryption: leave main configuration blank, set up fallback configuration, then cause a change in state
- re-encryption (either key rotation or switching encryption provider/versions): set up main configuration for the new desired setup, set up fallback configuration with old setup, then cause a change in state
This of course requires state crypto providers to check the integrity of the decrypted payload, e.g. using a MAC, but this is best practice anyway. Furthermore, all state crypto providers should gracefully read unencrypted state and only print a warning.
I propose that this feature should be introduced as an experimental feature for now, and while it is experimental, we should introduce two environment variables
TF_REMOTE_STATE_ENCRYPTION
TF_REMOTE_STATE_DECRYPTION_FALLBACK
To enable the feature, users will need to set the environment variables to a json value that fills this data structure:
// StateCryptoConfig holds the configuration for transparent client-side remote state encryption
type StateCryptoConfig struct {
    // select the implementation to use
    //
    // supported values are (EXAMPLE!!! NOT FULLY IMPLEMENTED YET!!!)
    //   "client-side/AES256-cfb/SHA256"
    //   "client-side/AES256-gcm"
    //   "azure-key-vault/RSA1.5/AES256-gcm"
    //
    // supplying an unsupported value raises an error
    Implementation string `json:"implementation"`

    // implementation-specific parameters, such as the key
    Parameters map[string]string `json:"parameters"`
}
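For instance, a key rotation could be staged like this (implementation names are from the example list above; the keys are placeholders):

# encrypt with the new key from now on...
export TF_REMOTE_STATE_ENCRYPTION='{"implementation":"client-side/AES256-gcm","parameters":{"key":"<new-key-hex>"}}'
# ...while still accepting state that was encrypted with the old key
export TF_REMOTE_STATE_DECRYPTION_FALLBACK='{"implementation":"client-side/AES256-gcm","parameters":{"key":"<old-key-hex>"}}'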
I would add the appropriate documentation, including a clear notice that the feature is experimental for now.
I invite, and very much welcome, feedback from tf maintainers as to anything I might have overlooked or should do differently.
I am working on a project for a large financial institution and we do use Terraform. However, every time it comes up in security meetings, I have a hard time explaining that Terraform does not come with statefile encryption. Even though we can use Azure Storage encryption to work around this problem, that only shifts the risk to Azure RBAC, as @jamesrcounts put it.
It comes as a surprise to me that statefile encryption has not been prioritised before, even though enterprise clients are important to Terraform; they care a lot about security and encryption. I would like to propose that statefile encryption be implemented natively as an option within Terraform, in the way @apparentlymart defined it initially. I think the first step would be to allow encryption of the whole statefile. In future releases, you could implement more granular secret management.
The more granular secret management would look like this: there is still the Terraform statefile with the keys for the secrets, but the values are references to other resources/storage where you store the actual secret. This can be an Azure Key Vault, for example. You basically remove secret management entirely out of Terraform; Terraform should not manage secrets but only infrastructure. Terraform only needs a service principal to log in to Azure Key Vault and retrieve the secret values to use in the statefile. You can do that for all secrets or just for some.
Another feature would be, in addition to that, to allow targeting of secret-value encryption in the statefile: with certain flags or keywords in the configuration files, you target which secret values to encrypt within the statefile. Finally, you might end up with the following options:
- Whole statefile is encrypted natively by Terraform
- Statefile is not encrypted but the secret values are references to Azure Key Vault
- Statefile is not encrypted as a whole but only certain secret values are encrypted
To be honest, my recommendation is to go first for whole-statefile encryption, since the more granular options are good to have but not necessary from an enterprise's point of view. They just want the secrets to not be plain text in the statefile, and to be stored somewhere safe.
Please let me know about the status on this issue. Thank you!
Any news on my pull request that implements remote state encryption?
https://github.com/hashicorp/terraform/pull/28603
@jbardin you joined the PR. Is there something I am supposed to be doing/can do? It's now been over two months of no reaction from maintainers.
@StephanHCB I love your PR, it's far better than what I could have come up with. I've Approved it, but I'm not sure if that's at all meaningful, since I'm not a maintainer.
As @stumyp suggested, a popular project for JSON encryption already exists: Mozilla SOPS, which allows encrypting selected values while leaving the tfstate JSON syntactically untouched for backends. SOPS is an open-source Go project supporting encryption with AWS KMS, GCP KMS, Azure Key Vault, age, and PGP (and more than one at the same time, to make decryption robust). SOPS is already integrated with Terraform as a provider (the sops provider) to allow encrypted secrets in .tf/.tfvars to be decrypted at runtime (although IMHO a better solution would be transparent and as simple as decrypting a var at read time, without a dedicated provider and additional terraform resources).
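For reference, usage of that provider looks roughly like this (file name and key path are placeholders):

# decrypt a sops-encrypted file at plan/apply time
data "sops_file" "secrets" {
  source_file = "secrets.enc.json"
}

# decrypted values are then available as, e.g.:
# password = data.sops_file.secrets.data["db.password"]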
Instead of an all-or-nothing solution for tfstate encryption, would it not be feasible to integrate the SOPS library (or SOPS-like logic) to achieve selective, automated encryption/decryption of secret values in the backend's in-memory process stream?
With multi-key encryption, use cases like managing HashiCorp Vault configuration with Terraform for static secrets would also become viable (greatly mitigating the chicken-and-egg secret-protection problem).
Encryption of secrets in tfstate is mandatory, as at-rest encryption alone is a weakness for critical secrets like those of a HashiCorp Vault configuration (a super-critical component for security). Cleartext secrets in tfstate can be leaked through access policy errors, and even data encrypted at rest can be read by provider operators. No matter how strong the at-rest encryption is, or even whether it is enabled, if someone has access to the object they can download it decrypted; see this security incident.
@Roxyrob I think you make a good point.
But I don't think it matters much, because if I were to venture a guess I would say that in HashiCorp's philosophy they "trust" cloud providers. To a degree I can understand that point of view: if you are already running everything in their cloud (VMs, etc.), encrypting your statefile before uploading it isn't going to give you that much extra security. So from their point of view it might help against such a one-of-a-kind Amazon incident, but other than that not much.
Now obviously this is a lot of conjecture on my part, but it is the only way I can explain treating state file encryption at rest as a low priority. Meaning there is a lack of incentive for them to invest time in this PR or the issue at all. Which is fine; it's open source and they don't owe anyone anything.
@siepkes I understand your supposition, but I think data security is a primary concern for every tool, and Terraform is too great a tool to neglect it, even if there is no simple implementation.
Probably I'm wrong, but IMHO any piece of data, especially data that is sensitive by nature (as much of the data in tfstate is, or potentially can be), should be managed following security-by-design and security-by-default principles. Data security cannot be left to external factors.
Without this basic assumption, Terraform will only be a great tool for provisioning infrastructure that can work without secrets.
I cannot see any cloud solution (HCP included) that can provide adequate safety unless the data is encrypted with keys that are totally and independently in the hands of the data producer.
I'm anyway interested to know whether a sops-like logic (with tfstate JSON values selectively encrypted) would be viable or not.
You can use backends other than s3. For example, using consul as a backend will be more secure by design.
The issue is that ALL of these approaches make a "copy" of the credentials. That is just wrong right out of the gate. Especially when using something like Vault, it should be storing a reference to the credentials, to be loaded when applied.
If Terraform were operating as a client-server type implementation, where the state had to be independently accessed by a service on some other box in order to apply it, then yeah, I could understand the current behavior; but it's being 100% driven by the terraform executable and modules on the client system.
The suggestion to use an encrypted backing store doesn't change the fundamental issue that terraform is duplicating the actual credentials and saving them elsewhere.
@FernandoMiguel Consul or any other solution does not change the situation: the tfstate is always in cleartext somewhere, and someone can access the file and thus the secrets inside (unless you keep everything on a server in a private room, detached from all networks and always watched).
A sops-like logic instead lets you save a JSON file (and so potentially the tfstate JSON too) with its values (all or some of them) encrypted using, e.g., an AWS KMS CMK.
Such an approach increases security (and probably provides sufficient risk mitigation) by offering encryption/decryption of JSON values as a service, with master encryption keys never known to any person and access governed by IAM and KMS key policy configurations.
Nothing is perfect in the chicken-and-egg problem world, and you can say: "Hey, AWS operators could potentially extract your master key from KMS!" But AWS applies much stronger operational protection, greatly lowering the probability of such a scenario, and would be responsible for the incident (several operators would have to collude to steal a master key, and so on); with the stolen keys, those operators would also have to know where to find your encrypted secrets, some of which live in a VCS that is probably not accessible or not known to them, and so on.
Everything is possible, but this is much less risky than having cleartext secrets in a file on cloud storage.
@nneul the tfstate cleartext problem can be mitigated without undermining the basic principles of Terraform's behavior; moreover, having encrypted values in the tfstate file (or in another solution better suited for the purpose, like consul and so on) can be potentially useful (to share between different terraform configs or different DevOps tools in the pipeline).
I think that in the cloud era we cannot avoid secrets being "a little out of our control" (e.g. we will never have total control over all the cloud storage solutions involved in a complex infrastructure/process). What we can do instead is do our best on data security.
Frankly, I'm quite shocked and surprised that HashiCorp hasn't placed a higher priority on providing a mechanism for encrypting the tfstate at rest. In many organisations these state files contain the 'keys to the kingdom' and a comprehensive map of their infrastructure, and should be considered highly sensitive.
Knowing that most large companies use Terraform, and that there's a good chance that many/most will use the Amazon S3 backend, if I were intent on compromising one of them I would expect a pretty good chance of success if I could simply gain read-access to their state bucket. If I were a malicious actor, I would probably start by focussing my efforts there. And it probably wouldn't be hard to find a few companies that have incorrectly configured their bucket ACL, or that have staff with read access inadvertently configured for their user/group or whatever.
However, if I suspected that all I'd find were encrypted state files, and that I'd need to obtain additional keys to get to reveal the full set of 'keys to the kingdom', then that would certainly act as a deterrent at least and I'd probably look for an alternative attack vector.
Another way to look at it is - how would you feel if your plaintext tfstate file got leaked on the DarkWeb? :scream: