Feature Request: Ability to whitelist/restrict Environment Variables passed to Provider Plugins
Terraform Version
1.x
Use Cases
Currently, when Terraform initializes a provider, it spawns the provider plugin as a child process. By default, this child process inherits the entire environment context of the parent terraform CLI process.
This means that if I run terraform apply, every configured provider (AWS, Azure, Kubernetes, etc.) has visibility into every environment variable present in the shell (e.g., GITHUB_TOKEN, AWS_ACCESS_KEY_ID, ARM_CLIENT_ID), regardless of whether that specific provider needs them.
The current behavior violates the principle of Least Privilege and leads to configuration conflicts in complex environments:
- Security/Leakage: A compromised or malicious provider plugin (or a third-party community provider) theoretically has access to credentials meant for other providers (e.g., a Datadog provider process can read my AWS_SECRET_ACCESS_KEY).
- Implicit Auth Conflicts: When using multiple aliases for the same provider (e.g., aws.prod and aws.dev), accidentally set shell variables (like AWS_PROFILE) can override explicit HCL configurations if the provider precedence logic favors environment variables.
- CI/CD Hygiene: In CI pipelines where many secrets are injected as env vars, it is difficult to isolate which secrets are visible to which step without complex shell wrappers.
Attempted Solutions
For now, the only workaround I have found is to avoid environment variables entirely and source credentials as ephemeral data from a remote Vault.
Proposal
I propose adding a meta-argument to the provider block (or a global setting) that allows users to explicitly whitelist which environment variables are passed to the plugin process.
If this list is present, Terraform Core should filter the environment passed to the go-plugin client, sending only the matching keys.
Proposed Syntax
Option 1 (preferred): Allow-list per provider. This would be non-breaking: if omitted, behavior defaults to "inherit all."
provider "aws" {
  region = "us-east-1"

  # New feature: only pass these specific env vars to the plugin process.
  # The plugin will not see GITHUB_TOKEN or other unrelated vars.
  allowed_environment_variables = ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_SESSION_TOKEN"]
}
Option 2: Strict isolation (boolean). A flag to strictly disable all env var inheritance, forcing the user to rely solely on input variables defined in the HCL.
provider "google" {
  project = "my-project"

  # New feature: the plugin process starts with an empty environment.
  isolate_environment = true
}
Hi @or-shachar,
Thanks for the request! Something to consider here, though, is that Terraform does not isolate its plugin processes, so while limiting the child process's environment might be relatively easy, a malicious plugin can still look up all the environment variables of the Terraform process itself.
Maybe I'm missing something in the implementation, but what if we run the sub-process inside a sandbox?
Oh yes, it's not technically impossible; I just wanted to point out that simply removing the environment variables does not make it more secure on its own. As far as a new plugin implementation goes, you need to define what that "sandbox" is, and how to implement it consistently across all the supported platforms. Of course the first thing that comes to mind is containerization, but now you have a new runtime dependency which did not previously exist, and it is complicated by the fact that the majority of automation pipelines are already containerized and don't have host access.
Having it be under control of the module that happens to contain the provider block also means that an attacker-controlled module could indirectly start an attacker-controlled provider without specifying any of these settings, so it seems like it just moves the third-party-dependency risk through one additional level of indirection. 🤔
Relatedly, there's the problem that Terraform regularly starts provider processes without configuring them[^1] (where "configuring them" is what a provider block represents) and so having this specified on a per-provider-config basis, rather than on a per-provider basis (that is: specifying a rule for all instances of hashicorp/aws, rather than for one specific instance of hashicorp/aws) doesn't seem like it could work reliably.
Given that this is about controlling access to the environment where Terraform is running, perhaps it's more appropriate for settings like this to live in the CLI Configuration so that it can be set up just once by the same person who is setting up the environment variables, specifying an allowlist of environment variables that each provider is allowed to receive across all Terraform configurations run on that system:
provider_isolation {
  "hashicorp/aws" = {
    allowed_environment_variables = [
      "AWS_ACCESS_KEY_ID",
      "AWS_SECRET_ACCESS_KEY",
      "AWS_SESSION_TOKEN",
    ]
  }
}
Presumably under this design the presence of any provider_isolation block at all would mean that any provider not explicitly mentioned inside gets no environment variables at all, so the operator would need to configure the acceptable environment variables for all of the providers they are actually intending to use.
Of course the problem that all of these processes are currently running in the same security context and can therefore poke into each other's environment tables, read credentials files from disk, ptrace each other, etc still undermines all of this.
It's not clear it's worth introducing a limited isolation mechanism like this without also implementing more extensive isolation, but more extensive isolation is considerably harder to do with the provider ecosystem as it currently exists because today's providers expect to all be running in the same filesystem (so that filepaths can pass from one resource to another) and expect to be able to find provider-specific configuration files like ~/.aws/credentials, so isolating unmodified providers is quite likely to break parts of their documented functionality. 😖
Though I suppose perhaps if the isolation were opt-in with something similar to the CLI configuration setting I described above (but with a much larger scope, presumably also involving specifying a container runtime to use) then at least individuals could decide for themselves whether they're willing to accept provider misbehavior in return for isolation. 🤷‍♂️
[^1]: for example, terraform validate intentionally avoids "configuring" any providers so that it can run in environments where no credentials are available at all
> It's not clear it's worth introducing a limited isolation mechanism like this without also implementing more extensive isolation, but more extensive isolation is considerably harder to do with the provider ecosystem as it currently exists because today's providers expect to all be running in the same filesystem (so that filepaths can pass from one resource to another) and expect to be able to find provider-specific configuration files like ~/.aws/credentials, so isolating unmodified providers is quite likely to break parts of their documented functionality. 😖
Clearly, isolating some providers would break them if they rely on implicit external inputs. But many providers are simpler and require only a discrete set of env variables; this would allow us to stop them from gaining access to, say, my main cloud accounts.
I agree that this should be opt-in.
Doing this is not trivial on some platforms, and I don't know how we feel about implementing features that will initially be available on Linux only, but it would be super valuable for the security of our ecosystem.