terragrunt
AWS Auth for S3 backend enforces the profile and doesn't respect env variables
Hi, terragrunt maintainers! Thank you very much for creating this tool; bringing "DRY" to terraform is a really handy thing! However, I ran into an issue when I switched my pure terraform-managed infra stack to terragrunt.
The documentation says that terragrunt follows the standard AWS SDK flow for AWS authentication: https://aws.amazon.com/blogs/security/a-new-and-standardized-way-to-manage-credentials-in-the-aws-sdks/
As I understand it, it should first try the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` (if they are provided), and only after that fall back to other methods, such as a profile in the files `~/.aws/config` and `~/.aws/credentials`.
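That documented order can be sketched as a small shell check. This is a sketch only: the real SDK chain also covers web identity tokens, ECS/EC2 instance metadata, and so on, and `resolve_creds` is a hypothetical helper, not part of any AWS tooling.

```shell
# Sketch of the documented resolution order: static env vars first,
# shared credentials file only as a fallback.
resolve_creds() {
  if [ -n "${AWS_ACCESS_KEY_ID:-}" ] && [ -n "${AWS_SECRET_ACCESS_KEY:-}" ]; then
    echo "environment"             # env vars win when both are set
  else
    if [ -f "${HOME}/.aws/credentials" ]; then
      echo "shared-credentials-file" # profile lookup happens only here
    else
      echo "none"
    fi
  fi
}

resolve_creds
```

The key point for this issue is the first branch: a hardcoded `profile` should never be consulted when both env vars are present.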
Problem description
My S3 backend includes the `profile` setting. Example:
```hcl
# backend.tf
# Generated by Terragrunt. Sig: nIlQXj57tbuaRZEa
terraform {
  backend "s3" {
    profile = "my-profile-dev" # <-- The profile is hardcoded
    region  = "us-east-1"
    bucket  = "my-dev-terraform-state"
    encrypt = true
    key     = "dev/my-app"
  }
}
```
In my CI pipeline I don't have any `~/.aws/config` or `~/.aws/credentials`, so the profile is not available, but I do have the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` env variables exported. However, both `terragrunt init` and `terragrunt plan` fail with:
```
# ...
[terragrunt] 2020/07/31 12:13:30 Generated file /path/to/workdir/.terragrunt-cache/oM9M1EUi8LIr7ypW3ioIlO-9eYo/nY19rdwnNrHEET77DIMGGXqSHiw/infrastructure-modules/my-app/backend.tf.
[terragrunt] 2020/07/31 12:13:36 Error finding AWS credentials (did you set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables?): NoCredentialProviders: no valid providers in chain. Deprecated.
	For verbose messaging see aws.Config.CredentialsChainVerboseErrors
[terragrunt] 2020/07/31 12:13:36 Unable to determine underlying exit code, so Terragrunt will exit with error code 1
```
It seems that `terragrunt` forcibly tries to use the specified profile name and ignores the provided env variables when initializing the S3 backend. If I comment out the `profile` line from the backend config, it works fine.
Other findings
The most interesting part is that pure terraform works just fine in this case. I can switch to the generated temp dir (inside `.terragrunt-cache/`) and run `terraform init` for the backend with the profile hardcoded (but not existing in `~/.aws/*`). It then uses the credentials from the env vars without failing, as expected:
```
$ cd /path/to/workdir/.terragrunt-cache/oM9M1EUi8LIr7ypW3ioIlO-9eYo/nY19rdwnNrHEET77DIMGGXqSHiw/infrastructure-modules/my-app
$ terraform init

Initializing modules...

Initializing the backend...

Initializing provider plugins...
# <skipped>

Terraform has been successfully initialized!
```
Questions
Is this a bug, or the expected behavior? If the latter, why does it behave differently from plain terraform?
Is there any way to force picking up credentials from the env vars while still keeping the profile name explicitly set in `backend.tf`? We still need it for manual execution from workstations, where AWS credentials are defined in a profile in `~/.aws/config` and `~/.aws/credentials`.
> Is that a bug, or the expected behavior? If the latter - why does it behave differently than the plain terraform?
This is sort of expected behavior. Terragrunt uses those parameters internally to auto-create the S3 bucket if it doesn't exist. What you are observing is how terragrunt sets up its own credentials within the binary when calling the AWS API.
That said, terragrunt should configure its credentials in a way that env vars can override them, like terraform does. I suspect this routine isn't doing the right thing.
> Are there any way how I can enforce picking credentials from the env vars and still keep profile name explicitly set in the backend.tf?
There is a hacky workaround you can do, which is to introduce an env var that disables the hardcoded profile using conditional logic in terragrunt. Something like (in your terragrunt config):
```hcl
remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite"
  }
  config = {
    profile = (
      get_env("TERRAGRUNT_DISABLE_PROFILE", "false") == "true"
      ? null
      : "my-profile-dev"
    )
    bucket         = "my-terraform-state"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "my-lock-table"
  }
}
```
With this, any time you set the env var `TERRAGRUNT_DISABLE_PROFILE` to `true`, the `profile` line is dropped from the generated backend config without editing anything.
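In practice, a CI job would then run something like `TERRAGRUNT_DISABLE_PROFILE=true terragrunt init`, while workstations leave the variable unset. The `get_env()` conditional above can be mimicked in shell to check which value ends up in the generated config (a sketch; `effective_profile` is a hypothetical helper and `my-profile-dev` is the example profile from this thread):

```shell
# Mirror of the get_env("TERRAGRUNT_DISABLE_PROFILE", "false") conditional:
# "true" drops the profile (CI, env-var creds); anything else keeps it.
effective_profile() {
  if [ "${TERRAGRUNT_DISABLE_PROFILE:-false}" = "true" ]; then
    echo ""                 # profile = null -> omitted from backend.tf
  else
    echo "my-profile-dev"   # workstation default from ~/.aws/config
  fi
}

effective_profile
```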
Thank you, @yorinasub17,
> With that said, terragrunt should configure its credentials in a way that you can override that with env vars, like terraform. I suspect this routine isn't doing the right thing.
So, do I understand correctly that this is an issue which should eventually be fixed on the terragrunt side?
Anyway, thanks for the workaround you suggested. It works fine for now 👍
Hi, it looks like this issue has gone stale, so I'd like to give it a bump. Seems related to https://github.com/gruntwork-io/terragrunt/issues/671 as well.
This is still a bug in Terragrunt v0.31.6.
Given that both the AWS documentation Terragrunt links to and the AWS provider only consult the `profile` and shared-credentials-file settings when static credentials and environment variables are not found, I'd argue strongly that the current behavior is not correct.
(We are no longer using the `help wanted` and `prs-welcome` labels, because ALL issues are open to contributions! We will make a note of this in the repo README.)
FYI: it seems v4 of the AWS terraform provider changed the order in which credentials are evaluated, and it now matches what Terragrunt does - though both still differ from the AWS documentation 🤷
https://github.com/hashicorp/terraform-provider-aws/issues/25129