Terraform using `-target` on a module does not look up remote states
Terraform Version
0.10.6
Affected Resource(s)
- module
Terraform Configuration Files
In the configuration that produces the remote state:
output "staging_default_security_group_id" {
value = "${module.strat_staging_vpc.default_security_group_id}"
}
output "staging_public_subnet_ids" {
value = [
"${module.strat_staging_vpc.public_subnets}",
]
}
output "staging_private_subnet_ids" {
value = [
"${module.strat_staging_vpc.private_subnets}",
]
}
In the top-level configuration:
data "terraform_remote_state" "strat_ops_vpc" {
backend = "s3"
config {
bucket = "com-strat-terraform"
key = "ops/vpc/terraform.tfstate"
region = "${var.region}"
}
}
module "rds" {
source = "../../../../modules/rds/snapshot"
apply_immediately = "true"
environment = "staging"
instance_class = "db.m3.medium"
skip_final_snapshot = "true"
final_snapshot_identifier = "health-staging-final-5"
name = "health-staging"
security_group_ids = ["${data.terraform_remote_state.strat_ops_vpc.staging_default_security_group_id}"]
snapshot_identifier = "${var.rds_snapshot_identifier}"
subnet_ids = ["${data.terraform_remote_state.strat_ops_vpc.staging_private_subnet_ids}"]
engine_version = "9.5.7"
}
...other modules omitted...
Debug Output
Running apply on the configuration that defines the outputs succeeds and shows them:
11:55:01 Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
11:55:01
11:55:01 Outputs:
11:55:01
11:55:01 staging_default_security_group_id = sg-3b3b6f4
11:55:01 staging_nat_enable = true
11:55:01 staging_private_subnet_ids = [
11:55:01 subnet-eae465a,
11:55:01 subnet-2decb31
11:55:01 ]
11:55:01 staging_public_subnet_ids = [
11:55:01 subnet-f6e766e,
11:55:01 subnet-2cecb30
11:55:01 ]
11:55:01 staging_security_group_ids = [
11:55:01 sg-3b3b6f4
11:55:01 ]
Using `terraform apply -target=module.rds` fails and never attempts to look up the remote state outputs:
11:55:11 [terraform] Running shell script
11:55:12 + cd app/health/webserver/staging
11:55:12 + terraform apply --var elasticache_snapshot_identifier=automatic.health-production-2017-10-27-06-00 --var redshift_snapshot_identifier=rs:stratashift-2017-10-27-05-41-52 --var rds_snapshot_identifier=rds:health-production-2017-10-27-02-05 --var version_label=health-99b7672-2017-10-26T18:14:03.666173 -target=module.rds -target=aws_redshift_cluster.stratashift -no-color
11:55:13 Error running plan: 3 error(s) occurred:
11:55:13
11:55:13 * aws_redshift_subnet_group.stratashift: 1 error(s) occurred:
11:55:13
11:55:13 * aws_redshift_subnet_group.stratashift: Resource 'data.terraform_remote_state.strat_ops_vpc' does not have attribute 'staging_private_subnet_ids.0' for variable 'data.terraform_remote_state.strat_ops_vpc.staging_private_subnet_ids.0'
11:55:13 * module.rds.var.security_group_ids: Resource 'data.terraform_remote_state.strat_ops_vpc' does not have attribute 'staging_default_security_group_id' for variable 'data.terraform_remote_state.strat_ops_vpc.staging_default_security_group_id'
11:55:13 * module.rds.var.subnet_ids: Resource 'data.terraform_remote_state.strat_ops_vpc' does not have attribute 'staging_private_subnet_ids' for variable 'data.terraform_remote_state.strat_ops_vpc.staging_private_subnet_ids'
You can see it never performed any lookups. Running the same command without `-target=...` works exactly as expected:
13:09:52 [terraform] Running shell script
13:09:52 + cd app/health/webserver/staging
13:09:52 + terraform apply --var elasticache_snapshot_identifier=automatic.health-production-2017-10-27-06-00 --var redshift_snapshot_identifier=rs:stratashift-2017-10-27-02 --var rds_snapshot_identifier=rds:health-production-2017-10-27-02-05 --var version_label=health-22222 -no-color
13:09:53 data.terraform_remote_state.health_beanstalk: Refreshing state...
13:09:53 data.terraform_remote_state.strat_spectrum_iam_role: Refreshing state...
13:09:53 data.terraform_remote_state.strat_ops_vpc: Refreshing state...
13:09:53 data.terraform_remote_state.strat_ops_lambda_to_slack: Refreshing state...
13:09:53 data.aws_ami.redis: Refreshing state...
13:09:53 data.aws_ami.beanstalk: Refreshing state...
13:09:53 aws_iam_role.main-ec2-role: Refreshing state... (ID: health-staging-webserver-ec2)
...logs more lookups...
Expected Behavior
Resources and modules can be targeted individually, with any remote state data sources they depend on being refreshed first.
Actual Behavior
Targeting `module.rds` individually fails with "does not have attribute" errors because the `terraform_remote_state` data source is never refreshed, so its outputs are unavailable.
Steps to Reproduce
See the debug output above.
Important Factoids
- The remote state has outputs for the subnets, security group, etc.
- The failure only occurs when targeting the rds module individually; the same configuration works as expected when not targeted.
If you run `terraform output` in the folder containing the configuration for the remote state, do you see those outputs? If not, you haven't run an apply or refresh since adding them, so they are not in the remote state file.
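A quick way to verify (a minimal sketch; the directory path is inferred from the S3 key in the config above):

```
# Run in the directory that owns the ops/vpc state.
cd ops/vpc
terraform output
# If the expected outputs (e.g. staging_private_subnet_ids) are not listed,
# apply or refresh so the output blocks are written into the state file:
terraform refresh
```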
The error only appears to happen when the state is empty (the output above is from our Jenkins build that rebuilds staging from nothing). In current testing this works as expected when the resources all exist, but fails when the state is essentially empty.
That makes sense. When Terraform queries a remote state, it doesn't run any of the resources, so if there isn't an output block in the state file, it doesn't know about any outputs.
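As an aside, the `terraform_remote_state` data source documents a `defaults` map for exactly this case. A hedged sketch (the value is a placeholder, and in HCL1 the map can only hold strings, so list outputs such as the subnet IDs can't be defaulted this way):

```
data "terraform_remote_state" "strat_ops_vpc" {
  backend = "s3"

  config {
    bucket = "com-strat-terraform"
    key    = "ops/vpc/terraform.tfstate"
    region = "${var.region}"
  }

  # Consulted only when the referenced state lacks the output.
  defaults {
    staging_default_security_group_id = "" # placeholder
  }
}
```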
@bdashrad got around to testing: running `terraform refresh --var ...` before the targeted apply fixes the issue, although it's still strange to have to refresh outputs when there aren't any yet, before targeting resources.
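For reference, the workaround sequence looks roughly like this (flags abbreviated as in the comment above; substitute your own variables):

```
# Refresh first so the remote-state data sources and their outputs land in the state:
terraform refresh --var rds_snapshot_identifier=...
# The targeted apply can then resolve the remote state attributes:
terraform apply --var rds_snapshot_identifier=... -target=module.rds
```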
I have the same issue here, running TF version 0.11.10. I have run similar targeted applies and never came across this issue before; not exactly sure why it just started happening. `terraform refresh` resolves the issue here. Anyway, there should be no need to run a refresh: it should refresh automatically when running apply or plan.