terraform-aws-remote-state-s3-backend

With 0.3.1, when tearing down the module, getting an error on terraform_iam_policy output

Open apogrebnyak opened this issue 4 years ago • 7 comments

Version: 0.3.1

After this change (https://github.com/nozaq/terraform-aws-remote-state-s3-backend/commit/7290218e4cb2c702e9c095178239dce6bbb2f185#diff-c09d00f135e3672d079ff6e0556d957d), tearing down the module fails with:

Error: Invalid index

  on .terraform/modules/remote_state/terraform-aws-remote-state-s3-backend-0.3.1/outputs.tf line 23, in output "terraform_iam_policy":
  23:   value       = var.terraform_iam_policy_create ? aws_iam_policy.terraform[0] : null
    |----------------
    | aws_iam_policy.terraform is empty tuple

The given key does not identify an element in this collection value.

Before that change, nothing was accessed by index.
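For context, the linked commit presumably put a count guard on the policy resource, which is what turns aws_iam_policy.terraform into a tuple; the error above shows that during destroy this tuple is empty, so the [0] index fails even though terraform_iam_policy_create is true. A minimal sketch of that pattern (not the module's actual source; the variable default, name prefix, and policy body are placeholders):

variable "terraform_iam_policy_create" {
  type    = bool
  default = true
}

resource "aws_iam_policy" "terraform" {
  # The count guard makes aws_iam_policy.terraform a tuple; while the resource
  # is being destroyed that tuple is empty, so indexing it with [0] fails.
  count = var.terraform_iam_policy_create ? 1 : 0

  name_prefix = "terraform-state-access-" # placeholder prefix
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:ListBucket"]
      Resource = ["*"]
    }]
  })
}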

It looks like changing the output to this fixes the error:

output "terraform_iam_policy" {
  description = "The IAM Policy to access remote state environment."
  value       = var.terraform_iam_policy_create ? (
        length(aws_iam_policy.terraform) == 0 ? null : aws_iam_policy.terraform[0]
      ) : null
}
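An equivalent but shorter variant, if you're on Terraform 0.12.20 or later where try() is available (just a sketch, not tested against the module):

output "terraform_iam_policy" {
  description = "The IAM Policy to access remote state environment."
  # try() falls back to null when the tuple is empty (e.g. during destroy)
  # instead of raising the "Invalid index" error.
  value = try(aws_iam_policy.terraform[0], null)
}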

apogrebnyak avatar Aug 17 '20 13:08 apogrebnyak

Thank you for the report.

Hmm, I couldn't reproduce it on my side. I tried terraform apply and then terraform destroy in my testing account, and both completed without any errors.

nozaq avatar Sep 22 '20 00:09 nozaq

The issue arises when terraform_iam_policy is used in an output of the calling code.

Also, I just verified that the problem still exists in version 0.4.0.

Here is a one-page configuration that demonstrates the problem:

locals {
  common_prefix = "test-deploy"
}

terraform {
  required_version = ">= 0.12.24"
}

provider "aws" {
  version = ">= 2.65.0"
}

provider "aws" {
  version = ">= 2.65.0"

  alias = "replica"
}

module "remote_state" {
  source = "nozaq/remote-state-s3-backend/aws"
  version = "0.4.0"

  providers = {
    aws         = aws
    aws.replica = aws.replica
  }

  dynamodb_table_name = "${local.common_prefix}-lock"

  noncurrent_version_transitions = []
  noncurrent_version_expiration = {
    days = 90
  }

  state_bucket_prefix = "${local.common_prefix}-bucket-"
  replica_bucket_prefix = "${local.common_prefix}-replica-"

  terraform_iam_policy_name_prefix = "test--state-access-"
}

output "terraform_iam_policy" {
  value = module.remote_state.terraform_iam_policy
  description = "The IAM Policy to access remote state environment."
}

If you comment out the last output statement, the error is not raised on destroy.

apogrebnyak avatar Sep 22 '20 19:09 apogrebnyak

I'm using TF 0.13.4 and also can't reproduce, even when outputting module.remote_state.terraform_iam_policy.arn. From your code, I wonder if your issue is that the replica is in the same region as the source bucket?

mattwillsher avatar Oct 19 '20 16:10 mattwillsher

I'm using TF 0.13.4 and also can't reproduce, even when outputting module.remote_state.terraform_iam_policy.arn. From your code, I wonder if your issue is that the replica is in the same region as the source bucket?

Is that not a supported configuration?

apogrebnyak avatar Oct 19 '20 16:10 apogrebnyak

I've not double-checked the actual code, but the README says:

Two providers must point to different AWS regions.

mattwillsher avatar Oct 19 '20 16:10 mattwillsher

Two providers must point to different AWS regions.

What is the issue with pointing to the same region? I think requiring a replica in a different region is overkill.

apogrebnyak avatar Oct 19 '20 17:10 apogrebnyak

Perhaps, and it may be unrelated to your issue. I quite like the level of overkill in this module, given it's storing the state. Losing state files keeps me awake at night :)

Could you test and see if it does solve your issue?
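Something along these lines for the two provider blocks, so the replica targets a different region (the region names below are just placeholders):

provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "replica"
  region = "us-west-2" # must differ from the primary provider's region, per the README
}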

mattwillsher avatar Oct 19 '20 17:10 mattwillsher