terraform-aws-ec2-instance

ISSUE - Adding EBS Volume forces EC2 Replacement

Open · HeikoMR opened this issue 1 year ago · 1 comment

Description

Hello,

We deployed an EC2 instance via this module and initially configured one additional EBS volume. Now we need to add a second non-root volume, but doing so forces a replacement of the EC2 instance.

We tried defining it additionally in the ebs_block_device list, adding it via a separate ebs_volume resource block plus an attachment, and attaching it manually. All three variants cause Terraform to plan a recreation of the instance.

  • [x] ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (ONLY if state is stored remotely, which is hopefully the best practice you are following!): rm -rf .terraform/ [x]
  2. Re-initialize the project root to pull down modules: terraform init [x]
  3. Re-attempt your terraform plan or apply and check if the issue still persists [x]

Versions

  • Module version [Required]: 5.6.1

  • Terraform version: v1.7.0 and v1.9.5

  • Provider version(s): AWS 5.64.0

Reproduction Code [Required]

module "xyz" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "5.6.1"

  name                    = var.name
  instance_type           = var.instance_type
  ami                     = var.ami
  key_name                = aws_key_pair.ec2.key_name
  vpc_security_group_ids  = concat([module.xyz_sg.security_group_id], var.additional_security_group_ids)
  disable_api_termination = var.disable_api_termination

  root_block_device = [{
    encrypted   = true
    kms_key_id  = var.kms_key_arn
    volume_type = "gp3"
    volume_size = var.root_volume_size
  }]

  ebs_block_device = [
    {
      device_name = "/dev/sdf"
      volume_type = "gp3"
      volume_size = var.ebs_volume_size1
      kms_key_id  = var.kms_key_arn
    },
    {
      device_name = "/dev/sdg"
      volume_type = "gp3"
      volume_size = var.ebs_volume_size2
      kms_key_id  = var.kms_key_arn
    }
  ]

  tags = {
    Terraform = "true"
    Backup    = "true"
  }
}

Steps to reproduce the behavior:

  1. Launch an EC2 instance with a single non-root EBS volume.
  2. Add another non-root EBS volume, either inside the module's ebs_block_device list, as a new resource outside the module, or manually.
  3. Run terraform plan.

Expected behavior

The second non-root EBS volume should simply be created and attached to the instance without recreating the whole instance.

Actual behavior

Terraform wants to recreate the whole instance because of the EBS volumes. We were able to reproduce this in both of our environments (dev/prod).

Terminal Output Screenshot(s)

  # module.xyz.module.xyz.aws_instance.this[0] must be replaced
-/+ resource "aws_instance" "this" {
      ~ arn                                  = "arn:aws:ec2:eu-central-1:xyz:instance/i-xyz" -> (known after apply)
      ~ associate_public_ip_address          = true -> (known after apply)
      ~ availability_zone                    = "eu-central-1a" -> (known after apply)
      ~ cpu_core_count                       = 1 -> (known after apply)
      ~ cpu_threads_per_core                 = 2 -> (known after apply)
      ~ disable_api_stop                     = false -> (known after apply)
      ~ ebs_optimized                        = false -> (known after apply)
      - hibernation                          = false -> null
      + host_id                              = (known after apply)
      + host_resource_group_arn              = (known after apply)
      + iam_instance_profile                 = (known after apply)
      ~ id                                   = "i-xyz" -> (known after apply)
      ~ instance_initiated_shutdown_behavior = "stop" -> (known after apply)
      + instance_lifecycle                   = (known after apply)
      ~ instance_state                       = "running" -> (known after apply)
      ~ ipv6_address_count                   = 0 -> (known after apply)
      ~ ipv6_addresses                       = [] -> (known after apply)
      ~ monitoring                           = false -> (known after apply)
      + outpost_arn                          = (known after apply)
      + password_data                        = (known after apply)
      + placement_group                      = (known after apply)
      ~ placement_partition_number           = 0 -> (known after apply)
      ~ primary_network_interface_id         = "eni-xyz" -> (known after apply)
      ~ private_dns                          = "xyz.eu-central-1.compute.internal" -> (known after apply)
      ~ private_ip                           = "xyz" -> (known after apply)
      ~ public_dns                           = "xyz.eu-central-1.compute.amazonaws.com" -> (known after apply)
      ~ public_ip                            = "xyz" -> (known after apply)
      ~ secondary_private_ips                = [] -> (known after apply)
      ~ security_groups                      = [] -> (known after apply)
      + spot_instance_request_id             = (known after apply)
      ~ subnet_id                            = "subnet-xyz" -> (known after apply)
        tags                                 = {
            "Backup"    = "true"
            "Name"      = "xyz"
            "Terraform" = "true"
        }
      ~ tenancy                              = "default" -> (known after apply)
      + user_data                            = (known after apply)
      + user_data_base64                     = (known after apply)
        # (10 unchanged attributes hidden)

      - capacity_reservation_specification {
          - capacity_reservation_preference = "open" -> null
        }

      - cpu_options {
          - core_count       = 1 -> null
          - threads_per_core = 2 -> null
        }

      ~ credit_specification {
          - cpu_credits = "unlimited" -> null
        }

      - ebs_block_device { # forces replacement
          - delete_on_termination = true -> null
          - device_name           = "/dev/sdf" -> null
          - encrypted             = true -> null
          - iops                  = 3000 -> null
          - kms_key_id            = "arn:aws:kms:eu-central-1:xyz:key/xyz" -> null
          - tags                  = {} -> null
          - tags_all              = {} -> null
          - throughput            = 125 -> null
          - volume_id             = "vol-xyz" -> null
          - volume_size           = 400 -> null
          - volume_type           = "gp3" -> null
        }
      + ebs_block_device { # forces replacement
          + delete_on_termination = true
          + device_name           = "/dev/sdf"
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = "arn:aws:kms:eu-central-1:xyz:key/xyz"
          + snapshot_id           = (known after apply)
          + tags_all              = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = 400
          + volume_type           = "gp3"
        }
      + ebs_block_device { # forces replacement
          + delete_on_termination = true
          + device_name           = "/dev/sdh"
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = "arn:aws:kms:eu-central-1:xyz:key/xyz"
          + snapshot_id           = (known after apply)
          + tags_all              = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = 400
          + volume_type           = "gp3"
        }

      ~ enclave_options {
          ~ enabled = false -> (known after apply)
        }

      - maintenance_options {
          - auto_recovery = "default" -> null
        }

      ~ metadata_options {
          ~ instance_metadata_tags      = "disabled" -> (known after apply)
            # (4 unchanged attributes hidden)
        }

      - private_dns_name_options {
          - enable_resource_name_dns_a_record    = false -> null
          - enable_resource_name_dns_aaaa_record = false -> null
          - hostname_type                        = "ip-name" -> null
        }

      ~ root_block_device {
          ~ device_name           = "/dev/sda1" -> (known after apply)
          ~ iops                  = 3000 -> (known after apply)
          - tags                  = {} -> null
          ~ tags_all              = {} -> (known after apply)
          ~ throughput            = 125 -> (known after apply)
          ~ volume_id             = "vol-xyz" -> (known after apply)
            # (5 unchanged attributes hidden)
        }

        # (1 unchanged block hidden)
    }

Additional context

Is there any workaround for this if there is no fix? Our instance is already in production use.

Thanks in advance for your help.

HeikoMR avatar Aug 27 '24 06:08 HeikoMR

Are you active in here? Checking all the other issues from the last 3 months, they were auto-closed after 30 days by a bot. :(

HeikoMR avatar Aug 30 '24 07:08 HeikoMR

Workaround for anyone affected:

  1. Remove the instance from the state file: terraform state rm "module.ec2.module.ec2.aws_instance.this[0]"
  2. Update your module config to remove the EBS configuration from inside the module.
  3. Import the instance back into the state file: terraform import "module.ec2.module.ec2.aws_instance.this[0]" i-12345
  4. Add an aws_ebs_volume resource block as well as an aws_volume_attachment resource block.
  5. Import the existing additional volume into that resource block, as well as the volume attachment: terraform import "module.ec2.aws_ebs_volume.first" vol-12345 and terraform import "module.ec2.aws_volume_attachment.first" /dev/sdh:vol-12345:i-12345
  6. Add an additional aws_ebs_volume resource block for the new volume you want to create.
  7. Run terraform apply to create the new additional EBS volume.
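The standalone volume and attachment blocks from the steps above could look roughly like the following sketch. The resource names (`first`, `second`), device paths, sizes, and the `module.ec2` output references are illustrative assumptions, not taken from the original report:

```hcl
# Hypothetical sketch: existing additional volume managed outside the module
# (steps 4-5 above), imported rather than created.
resource "aws_ebs_volume" "first" {
  availability_zone = module.ec2.availability_zone # must match the instance's AZ
  size              = 400
  type              = "gp3"
  encrypted         = true
  kms_key_id        = var.kms_key_arn
}

resource "aws_volume_attachment" "first" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.first.id
  instance_id = module.ec2.id
}

# The new second volume (step 6) is declared the same way; terraform apply
# then creates and attaches it without replacing the instance, because it is
# no longer part of the instance's inline ebs_block_device configuration.
resource "aws_ebs_volume" "second" {
  availability_zone = module.ec2.availability_zone
  size              = 400
  type              = "gp3"
  encrypted         = true
  kms_key_id        = var.kms_key_arn
}

resource "aws_volume_attachment" "second" {
  device_name = "/dev/sdi"
  volume_id   = aws_ebs_volume.second.id
  instance_id = module.ec2.id
}
```

The key design point is that inline ebs_block_device entries are part of the aws_instance resource itself, so changing them can force replacement, while separate aws_ebs_volume plus aws_volume_attachment resources have their own lifecycle.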

HeikoMR avatar Sep 03 '24 07:09 HeikoMR

This issue has been automatically marked as stale because it has been open 30 days with no activity. Remove stale label or comment or this issue will be closed in 10 days

github-actions[bot] avatar Oct 04 '24 00:10 github-actions[bot]

This issue was automatically closed because it had been stale for 10 days.

github-actions[bot] avatar Oct 14 '24 00:10 github-actions[bot]

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions[bot] avatar Nov 13 '24 02:11 github-actions[bot]

For reference - this is an upstream provider issue https://github.com/hashicorp/terraform-provider-aws/issues/21806

bryantbiggs avatar Jun 04 '25 22:06 bryantbiggs

This issue has been resolved in version 6.0.0 :tada:

antonbabenko avatar Jun 24 '25 19:06 antonbabenko