Environment variables are not available when trying to import resources whilst using the remote backend
Terraform Version
Terraform v0.12.15
+ provider.aws v2.35.0
Terraform Configuration Files
```hcl
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "myorg"
    workspaces {
      name = "myworkspace"
    }
  }
}

provider "aws" {}

# rest irrelevant
```
Expected Behavior
As I've stored AWS_DEFAULT_REGION, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables in my workspace, I should be able to use them whilst doing terraform import like I can for terraform plan.
Actual Behavior
I receive an error that the provider is not configured correctly (as the environment variables configured in my workspace are not available):
```
  on /Users/shoekstra/git/myorg/myworkspace/terraform/main.tf line 1, in provider "aws":
   1: provider "aws" {}

The argument "region" is required, but no definition was found.
```
I can successfully import a resource if I prepend the command with the variables, e.g.

```shell
AWS_DEFAULT_REGION=eu-west-1 AWS_SECRET_ACCESS_KEY=... AWS_ACCESS_KEY_ID=... terraform import ...
```
Steps to Reproduce
- Create a resource using a provider that stores credentials in the workspace environment variables
- Run `terraform state rm <name of resource>`
- Attempt to import that resource without the environment variables present in the local shell
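The steps above can be sketched as a shell session. This is a minimal reproduction, assuming the workspace's AWS credentials live only in Terraform Cloud and not in the local shell; the resource name is a placeholder.

```shell
# 1. The resource already exists in the workspace state (created via a remote run).
terraform state list

# 2. Drop it from state. This local operation happens to work without
#    provider credentials, since it only touches state.
terraform state rm aws_s3_bucket.example

# 3. Try to re-import it locally. This fails, because `terraform import`
#    runs locally and the workspace's AWS_* environment variables are not
#    available to the CLI.
terraform import aws_s3_bucket.example my-bucket-name
# Error: The argument "region" is required, but no definition was found.
```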
Hi @shoekstra!
The behavior you've described is, unfortunately, as intended by the current design. Environment variables configured for a Terraform Cloud workspace are there to configure how Terraform runs in the remote execution environment, which is separate from when you run Terraform locally.
It's unlikely that this will change as you describe because that would involve the CLI being able to access the values of sensitive environment variables in the remote workspace like AWS_SECRET_ACCESS_KEY, which is not allowed by the API -- sensitive environment variables are write-only.
For the moment we're going to use this issue to represent the more general idea that ideally the remote backend should be able to run other operations like importing or state rm remotely too, because that's likely the only way we could make what you requested work without violating the write-only rule for sensitive variables. There is no short-term plan to do that, but it is a limitation we're aware of and would like to address eventually.
In the meantime, the intended usage pattern is indeed that for local operations you will need to obtain suitable values for the environment variables out of band and configure them on your local system. In many environments, the credentials an individual would use to run a local operation are different from the ones configured for Terraform Cloud itself to use. For example, they might be configured with only read access to the target account and specifically-controlled access to update the workspace state snapshots in S3, thus ensuring that only Terraform Cloud remote runs can be used to make changes to infrastructure, and thus that any real infrastructure changes are captured as runs in the Terraform Cloud UI.
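The write-only rule described above is visible directly in the workspace variables API: reading variables back returns no value for anything marked sensitive. A hedged sketch, assuming a hypothetical workspace ID and a token in `$TFC_TOKEN`:

```shell
# List a workspace's variables via the Terraform Cloud v2 API.
# ws-XXXXXXXX is a placeholder workspace ID.
curl -s \
  --header "Authorization: Bearer $TFC_TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/workspaces/ws-XXXXXXXX/vars
# Sensitive variables such as AWS_SECRET_ACCESS_KEY come back with
# "value": null and "sensitive": true, so the CLI has no way to read
# them for a local import.
```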
Thanks for the prompt reply!
Being new to the remote backend, I'd assumed the state subcommands were also being run remotely but, as they're not, what you've said makes sense and I agree with the rationale behind the current implementation.
Started bumping into this issue today too; being able to use the remote backend to import resources would be helpful. I'd assumed the import would take place entirely remotely, just as a terraform plan does when using the remote backend (which avoids exposing the sensitive environment variables). I was quite surprised when it didn't already work this way.
It's not a huge issue, but it does make things slightly more complicated than they need to be when importing resources.
We have just run into this as well. To say that this is a disappointment is an understatement. A big reason we purchased Terraform Enterprise was so that we could allow more of our staff to make changes through an approval process without them having access to credentials. With all the various compliance we follow this is a necessity.
There are a lot of resources that we managed in Ansible or not at all which we want to manage with Terraform. The inability to run terraform import without credentials is a real problem for us.
I'm having a bit of trouble using your workaround, which is an issue since there's no other way to import into the remote backend that I know of.
Terraform ignores my local environment variable credentials, and using -var-file or -var doesn't work either, which is an even bigger issue in my opinion than this feature not being implemented.
However, I do fully agree this is an essential feature for Terraform Cloud to be viable for many users. The documentation on this is misleading: it claims that the remote backend supports the import command, yet the variable docs say best practice is to set the 'sensitive' flag for provider credentials, making the import command unusable in workspaces that use provider credentials (surely every workspace on Terraform Cloud).
Bumping this issue for attention. We don't want to distribute credentials and have engineers maintain them locally.
IMO, import should run on the cloud remote runners as well.
Bumping this as well.
bumping
Thanks for your interest in this issue! This is just a reminder to please avoid "+1" comments, and to use the upvote mechanism (click or add the 👍 emoji to the original post) to indicate your support for this issue. This is how we prioritize issues for roadmap planning. Thanks again for the feedback!
Are there any plans to address this? We cannot ask people to get creds for an easy import. At least you should offer an import functionality in the UI so that it runs in a remote runner.
This is a pretty old issue that predates a lot of later features that make it possible to do various operations through the normal plan/apply workflow instead of via separate imperative commands.
In case it's useful to folks who are trying to do these things with HCP Terraform today, here's some relevant documentation:
- Import existing resources to state describes HCP Terraform's features for importing.
- Refactor modules describes how you can tell Terraform about historical changes you made to the addresses of resource instances so that it can automatically migrate objects in the state as part of a normal plan/apply round.
- Remove a resource from state describes a way to get a similar effect as `terraform state rm` using configuration language features.
- You can tell Terraform to replace an existing object, even if the configuration does not seem to require it to be replaced, by manually starting an HCP Terraform run and expanding "additional planning options" to find the option to specify resource instance addresses to replace. Alternatively, if you are using remote operations through Terraform CLI then you can use the `-replace=ADDR` option when you run `terraform apply` to get the same effect.
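For readers looking for the config-based equivalents mentioned above, here is a sketch of what they look like in configuration. The resource addresses and IDs are made-up placeholders; because these blocks are evaluated during a normal plan/apply round, a remote run uses the workspace's own credentials and no local credentials are needed.

```hcl
# Config-driven import (Terraform 1.5+), the config equivalent of
# `terraform import`:
import {
  to = aws_s3_bucket.example   # hypothetical resource address
  id = "my-existing-bucket"    # hypothetical real-world object ID
}

# Config-driven removal (Terraform 1.7+), the config equivalent of
# `terraform state rm`:
removed {
  from = aws_s3_bucket.old
  lifecycle {
    destroy = false  # forget the object from state; do not destroy it
  }
}

# Config-driven refactoring, the config equivalent of `terraform state mv`:
moved {
  from = aws_s3_bucket.old_name
  to   = aws_s3_bucket.new_name
}
```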
As far as I know it's still true that the local-execution-only commands can't work without locally-configured credentials, but HCP Terraform now offers various other ways to achieve similar effects through its own UI or API, or via config-based equivalents of the local-exec-only commands, which might allow you to get the result you need in a slightly different way.
I mention all of this just in case it's useful to anyone who is watching this issue. I don't mean to imply that this necessarily covers everything that this issue was originally about, since it seems like different participants all had slightly different ideas about exactly what features this issue was discussing.