terraform-aws-secure-baseline
# data.aws_subnets.default.ids known only after apply
## Describe the bug
When running `terraform plan`, the following message is thrown 17 times. This only occurs when `aws_organizations_organization` is used.
```
│ Error: Invalid for_each argument
│
│   on .terraform/modules/secure_baseline/modules/vpc-baseline/main.tf line 13, in data "aws_subnet" "default":
│   13:   for_each = toset(data.aws_subnets.default.ids)
│     ├────────────────
│     │ data.aws_subnets.default.ids is a list of string, known only after apply
│
│ The "for_each" value depends on resource attributes that cannot be
│ determined until apply, so Terraform cannot predict how many instances will
│ be created. To work around this, use the -target argument to first apply
│ only the resources that the for_each depends on.
```
This appears to be thrown once per provider, excluding `aws`, in the `providers` block.
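For context, this is Terraform's general restriction that `for_each` keys must be known at plan time; a module-level `depends_on` (as used on the `secure_baseline` module below, which depends on `aws_organizations_organization.org`) defers every data source inside the module until apply, which triggers exactly this error. A minimal standalone sketch of the same failure pattern — illustrative names only, not the module's actual code:

```hcl
# Illustrative only: any data source whose result depends on a resource
# that does not exist yet is "known only after apply".
resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}

data "aws_subnets" "default" {
  filter {
    name   = "vpc-id"
    values = [aws_vpc.example.id] # unknown until the VPC is created
  }
}

data "aws_subnet" "default" {
  # Plan fails here: the number of for_each keys cannot be predicted.
  for_each = toset(data.aws_subnets.default.ids)
  id       = each.value
}
```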
## Versions
- Terraform: v1.2.4
- Provider: 4.2.0
- Module: 1.1.0
## Reproduction
- Use the configuration in the additional context section
- Set up a remote backend using Terraform Cloud
- Add `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` as variables to the workspace, with rights sufficient to carry out all tasks
- Run `terraform init`
- Run `terraform plan`
## Expected behavior
Expected the plan to complete.
## Actual behavior
The following message is thrown 17 times, apparently once per provider, excluding `aws`, in the `providers` block. This only occurs when `aws_organizations_organization` is used.

```
│ Error: Invalid for_each argument
│
│   on .terraform/modules/secure_baseline/modules/vpc-baseline/main.tf line 13, in data "aws_subnet" "default":
│   13:   for_each = toset(data.aws_subnets.default.ids)
│     ├────────────────
│     │ data.aws_subnets.default.ids is a list of string, known only after apply
│
│ The "for_each" value depends on resource attributes that cannot be
│ determined until apply, so Terraform cannot predict how many instances will
│ be created. To work around this, use the -target argument to first apply
│ only the resources that the for_each depends on.
```
## Additional context
Configuration:

```hcl
data "aws_caller_identity" "current" {}

resource "aws_iam_user" "admin" {
  name = "my-admin"
}

resource "aws_organizations_organization" "org" {
  aws_service_access_principals = [
    "access-analyzer.amazonaws.com",
    "cloudtrail.amazonaws.com",
    "config.amazonaws.com",
  ]
  feature_set = "ALL"
}

module "secure_baseline" {
  source  = "nozaq/secure-baseline/aws"
  version = ">= 1.1.0"

  account_type                    = "master"
  member_accounts                 = var.aws_member_accounts
  audit_log_bucket_name           = var.audit_s3_bucket_name
  aws_account_id                  = data.aws_caller_identity.current.account_id
  region                          = var.aws_region
  support_iam_role_principal_arns = [aws_iam_user.admin.arn]

  guardduty_disable_email_notification = true

  # Setting it to true means all audit logs are automatically deleted
  # when you run `terraform destroy`.
  # Note that it might be inappropriate for a highly secured environment.
  audit_log_bucket_force_destroy = true

  providers = {
    aws                = aws
    aws.ap-northeast-1 = aws.ap-northeast-1
    aws.ap-northeast-2 = aws.ap-northeast-2
    aws.ap-northeast-3 = aws.ap-northeast-3
    aws.ap-south-1     = aws.ap-south-1
    aws.ap-southeast-1 = aws.ap-southeast-1
    aws.ap-southeast-2 = aws.ap-southeast-2
    aws.ca-central-1   = aws.ca-central-1
    aws.eu-central-1   = aws.eu-central-1
    aws.eu-north-1     = aws.eu-north-1
    aws.eu-west-1      = aws.eu-west-1
    aws.eu-west-2      = aws.eu-west-2
    aws.eu-west-3      = aws.eu-west-3
    aws.sa-east-1      = aws.sa-east-1
    aws.us-east-1      = aws.us-east-1
    aws.us-east-2      = aws.us-east-2
    aws.us-west-1      = aws.us-west-1
    aws.us-west-2      = aws.us-west-2
  }

  depends_on = [aws_iam_user.admin, aws_organizations_organization.org]
}
```
Does anyone have a workaround for this? I'm also seeing it with version 2.0.0.

For what it's worth, only the master/org management account throws the error; other accounts process correctly.
This is how I resolved it:

This issue looked to be due to the state for each `aws_default_vpc` being outdated/incompatible. At this time there are 17 regions/default VPC resources, which seemed to line up with @WTPascoe's experience of 17 error messages. I eventually found a more specific error message that looked like:

```
data could not be decoded from the state: unsupported attribute "ipv4_ipam_pool_id"
```
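To find the exact state addresses that need fixing, the output of `terraform state list` can be filtered. A sketch — the `terraform_state_list` function below is a hypothetical stand-in using sample output; in practice, pipe the real command instead:

```shell
# Stand-in for `terraform state list` (sample output only); replace this
# function with the real command against your backend.
terraform_state_list() {
  cat <<'EOF'
module.secure_baseline.module.vpc_baseline_us-east-1[0].aws_default_vpc.default
module.secure_baseline.module.vpc_baseline_us-east-1[0].aws_default_subnet.default[0]
module.secure_baseline.module.audit_log_bucket.aws_s3_bucket.content_bucket
EOF
}

# Keep only the default-VPC entries that need the rm/import treatment.
terraform_state_list | grep 'aws_default_vpc\.default$'
```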
To get around that, I removed each of the `aws_default_vpc` resources from state and re-imported it:

```shell
terraform state rm "module.secure_baseline.module.vpc_baseline_ap-northeast-2[0].aws_default_vpc.default"
terraform state rm "module.secure_baseline.module.vpc_baseline_ap-northeast-3[0].aws_default_vpc.default"
terraform state rm "module.secure_baseline.module.vpc_baseline_ap-south-1[0].aws_default_vpc.default"
....

terraform import "module.secure_baseline.module.vpc_baseline_ap-northeast-1[0].aws_default_vpc.default" vpc-123
terraform import "module.secure_baseline.module.vpc_baseline_ap-northeast-2[0].aws_default_vpc.default" vpc-456
terraform import "module.secure_baseline.module.vpc_baseline_ap-northeast-3[0].aws_default_vpc.default" vpc-789
terraform import "module.secure_baseline.module.vpc_baseline_ap-south-1[0].aws_default_vpc.default" vpc-4242
...
```
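Typing the `state rm` command for all 17 regions by hand is tedious; a hypothetical helper loop can generate them for review (the region list here is abbreviated — extend it to the regions you actually use; the `import` step still needs each region's real default VPC ID, so it isn't generated here):

```shell
# Print one `terraform state rm` command per regional vpc-baseline module.
# Review the output, then pipe it to sh when you're satisfied.
for region in ap-northeast-1 ap-northeast-2 us-east-1; do
  printf 'terraform state rm "module.secure_baseline.module.vpc_baseline_%s[0].aws_default_vpc.default"\n' "$region"
done
```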
I then removed all of the `aws_default_subnet` resources from state. I commented out the data blocks and the `aws_default_subnet` block in the vpc-baseline module, then ran `terraform apply` locally, since I was using a modified module. That allowed the run to succeed and sync the state with my account. Finally, I restored the original code of the nozaq module (data blocks and default subnet resources) and ran `terraform apply` once more. That allowed the vpc-baseline module to once again manage the default subnets.