
Questions about terragrunt destroy

Open · 880831ian opened this issue 1 year ago · 1 comment

Describe the bug
Hello, I recently started converting my Terraform code to Terragrunt for easier management, but I have run into some problems along the way. I hope it is okay to ask about them here; if I have made a mistake or misunderstood something, please let me know. Thank you.

First, here is my directory structure and the corresponding configuration.

  • directory structure
.
├── README.md
├── modules
│   ├── google_container_cluster
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── google_container_node_pool
│       ├── main.tf
│       └── variables.tf
└── projects
    ├── test-01234
    │   ├── gke
    │   └── test
    │       ├── aaa-node-pool
    │       │   └── terragrunt.hcl
    │       ├── bbb-node-pool
    │       │   └── terragrunt.hcl
    │       └── cluster
    │           └── terragrunt.hcl
    └── terragrunt.hcl

Under the modules folder, the code is split into a cluster module and a node_pool module by resource name, each containing its corresponding .tf files.


  • modules/google_container_cluster/main.tf
provider "google" {
  project      = var.project_id
}

resource "google_container_cluster" "cluster" {
  project    = var.project_id
  name     = var.cluster_name
  location = var.cluster_location
  node_locations = var.node_locations
  release_channel {
    channel           = var.release_channel
  }
  min_master_version = var.cluster_version
  network = var.network_name
  subnetwork = var.subnetwork_name
  default_max_pods_per_node = var.node_max_pods
  remove_default_node_pool = var.remove_default_node_pool
  initial_node_count = var.initial_node_count
  enable_intranode_visibility = false
  enable_shielded_nodes = var.enable_shielded_nodes
  resource_labels = var.resource_labels

  maintenance_policy {
    recurring_window {
      start_time = "2023-06-19T02:00:00Z"
      end_time   = "2023-06-19T09:00:00Z"
      recurrence = "FREQ=WEEKLY;BYDAY=TU,TH"
    }
  }

  ip_allocation_policy {
    cluster_secondary_range_name  = "gke-pods"
    services_secondary_range_name = "gke-service"
  }

  addons_config {
    http_load_balancing {
      disabled = false
    }
    horizontal_pod_autoscaling {
      disabled = false
    }
    network_policy_config {
      disabled = true
    }
  }

  dynamic "dns_config" {
    for_each = var.dns_enabled ? [1] : []
    content {
      cluster_dns       = var.cluster_dns
      cluster_dns_scope = var.cluster_dns_scope
    }
  }

  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false
    master_ipv4_cidr_block  = var.private_cluster_ipv4_cidr
  }

  master_auth {
    client_certificate_config {
      issue_client_certificate = false 
    }
  }

  dynamic "binary_authorization" {
    for_each = var.binary_authorization_enabled ? [1] : []
    content {
      evaluation_mode = var.binary_authorization
    }
  }

  logging_config {
    enable_components = ["SYSTEM_COMPONENTS", "WORKLOADS"]
  }

  monitoring_config {
    enable_components = ["SYSTEM_COMPONENTS"]
  }

  timeouts {}
}
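
For reference, modules/google_container_cluster/variables.tf declares one variable per var.* reference above. A short excerpt follows; the types and defaults here are my own guesses, not the original file:

variable "project_id" {
  type = string
}

variable "cluster_name" {
  type = string
}

variable "dns_enabled" {
  type    = bool
  default = false
}

variable "resource_labels" {
  type    = map(string)
  default = {}
}

# ...the remaining inputs follow the same pattern.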

  • modules/google_container_node_pool/main.tf
provider "google" {
  project      = var.project_id
}

resource "google_container_node_pool" "node_pool" {
  name       = var.node_pool_name
  project    = var.project_id
  cluster    = var.cluster_name
  location   = var.cluster_location
  node_count = var.node_count
  version    = var.cluster_version
  node_config {
    machine_type = var.node_machine_type
    disk_size_gb = var.node_disk_size
    disk_type    = var.node_disk_type
    image_type   = var.node_image_type
    oauth_scopes = var.node_oauth_scopes
    spot         = var.node_spot
    tags         = var.node_tags
    dynamic "taint" {
      for_each = var.node_taint_enabled ? [1] : []
      content {
        key    = var.node_taint_key
        value  = var.node_taint_value
        effect = var.node_taint_effect
      }
    }
  }
  management {
    auto_repair  = var.auto_repair
    auto_upgrade = var.auto_upgrade
  }
  upgrade_settings {
    max_surge       = var.upgrade_max_surge
    max_unavailable = var.upgrade_max_unavailable
    strategy        = var.upgrade_strategy
  }
  dynamic "autoscaling" {
    for_each = var.autoscaling_enabled ? [1] : []
    content {
      max_node_count       = var.autoscaling_max_node_count
      min_node_count       = var.autoscaling_min_node_count
      total_max_node_count = var.autoscaling_total_max_node_count
      total_min_node_count = var.autoscaling_total_min_node_count
    }
  }
  timeouts {}
  lifecycle {
    create_before_destroy = true
  }  
}

Under projects there is a terragrunt.hcl that holds the remote_state block and a generate "provider" block, and the tree is partitioned by GCP project. The different resource types (gke, gce, etc.) then live inside each project.

  • projects/terragrunt.hcl
remote_state {
  backend = "gcs"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite"
  }
  config = {
    bucket = "terraform-state"
    prefix = "test-01234/${path_relative_to_include()}"
  }
}

generate "provider" {
  path = "provider.tf"
  if_exists = "overwrite"
  contents = <<EOF
terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
      version = "~> 4.70.0"
    }
  }
}
EOF
}

inputs = {
  project_id = "gcp-XXX"
}
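
For context, the backend.tf that this remote_state block generates into each module directory looks roughly like this (a sketch for the cluster module; the exact prefix is whatever "test-01234/" plus path_relative_to_include() evaluates to):

terraform {
  backend "gcs" {
    bucket = "terraform-state"
    # illustrative value; the real prefix is "test-01234/${path_relative_to_include()}"
    prefix = "test-01234/gke/test/cluster"
  }
}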

  • projects/test-01234/gke/test/cluster/terragrunt.hcl
terraform {
  source = "${get_path_to_repo_root()}/modules/google_container_cluster"
}

include {
  path = find_in_parent_folders()
}

inputs = {
  cluster_name             = "test"
  cluster_location         = "asia-east1-b"
  node_locations           = []
  release_channel          = "UNSPECIFIED"
  cluster_version          = "1.26.5-gke.1200"
  network_name             = "XXX"
  subnetwork_name          = "XXX"
  node_max_pods            = 64
  remove_default_node_pool = true
  initial_node_count       = 3
  enable_shielded_nodes    = false
  resource_labels = {
    "dept" : "XXX",
    "env" : "XXX",
    "group" : "XXX",
    "product" : "XXX"
  }
  dns_enabled                  = false
  cluster_dns                  = "PROVIDER_UNSPECIFIED"
  cluster_dns_scope            = "DNS_SCOPE_UNSPECIFIED"
  private_cluster_ipv4_cidr    = "XXX"
  binary_authorization_enabled = true
  binary_authorization         = "DISABLED"
}

  • projects/test-01234/gke/test/aaa-node-pool/terragrunt.hcl

(the bbb-node-pool settings are similar to aaa-node-pool's)

terraform {
  source = "${get_path_to_repo_root()}/modules/google_container_node_pool"
}

include {
  path = find_in_parent_folders()
}

dependency "cluster" {
  config_path  = "../cluster"
  skip_outputs = true
}

inputs = {
  cluster_name      = "test"
  cluster_location  = "asia-east1-b"
  cluster_version   = "1.26.5-gke.1200"
  node_pool_name    = "aaa-node-pool"
  node_count        = 1
  node_machine_type = "e2-small"
  node_disk_size    = 100
  node_disk_type    = "pd-standard"
  node_image_type   = "COS_CONTAINERD"
  node_oauth_scopes = [
    "https://www.googleapis.com/auth/devstorage.read_only",
    "https://www.googleapis.com/auth/logging.write",
    "https://www.googleapis.com/auth/monitoring",
    "https://www.googleapis.com/auth/service.management.readonly",
    "https://www.googleapis.com/auth/servicecontrol",
    "https://www.googleapis.com/auth/trace.append"
  ]
  node_tags                        = []
  node_taint_enabled               = false
  node_taint_key                   = ""
  node_taint_value                 = ""
  node_taint_effect                = ""
  auto_repair                      = true
  auto_upgrade                     = false
  upgrade_max_surge                = 1
  upgrade_max_unavailable          = 0
  upgrade_strategy                 = "SURGE"
  autoscaling_enabled              = false
  autoscaling_max_node_count       = 0
  autoscaling_min_node_count       = 0
  autoscaling_total_max_node_count = 0
  autoscaling_total_min_node_count = 0
}
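
(The dependency "cluster" block above is only there to order the run-all graph so node pools run after the cluster; with skip_outputs = true I pass cluster_name by hand. If modules/google_container_cluster/outputs.tf exported a cluster_name output, which is my assumption here, it could be read instead, roughly like this:)

dependency "cluster" {
  config_path = "../cluster"
}

inputs = {
  # read the name from the cluster's outputs instead of hard-coding it
  cluster_name = dependency.cluster.outputs.cluster_name
}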

Question

Question 1: If I want to rename bbb-node-pool (to ccc-node-pool, say), I have to manually rename the bbb-node-pool folder under test and change node_pool_name in its terragrunt.hcl. When I then run terragrunt run-all plan, it plans to add ccc-node-pool, but bbb-node-pool is never planned for removal, and its state file on GCS is retained.
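
To illustrate (layout inferred from the remote_state prefix above), after the rename the bucket still holds all three prefixes:

terraform-state/
└── test-01234/.../gke/test/
    ├── aaa-node-pool/default.tfstate
    ├── bbb-node-pool/default.tfstate   # stale; nothing removes it
    └── ccc-node-pool/default.tfstate   # created by the renamed folder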

Question 2: I want to destroy only the ccc-node-pool resources, but I don't know how to remove them individually; terragrunt run-all destroy removes all resources.

Question 3: Following on from Question 2, if I simply delete the ccc-node-pool folder and then run terragrunt run-all plan, it does not detect that the ccc-node-pool resources have been removed.

Question 4: After a resource is destroyed, its folder on GCS still remains. Even though default.tfstate no longer contains any data, this is not a great experience.


Expected behavior
I think it should behave the same as plain Terraform, which automatically detects during plan that configuration files have been removed and destroys those resources first. It may be that my layout differs from the officially recommended one; I hope some users can share their opinions and ideas. Thank you.


Versions

  • Terragrunt version: v0.48.1
  • Terraform version: v1.5.3
  • Environment details (Ubuntu 20.04, Windows 10, etc.): macOS (darwin_amd64)

880831ian · Jul 28 '23 06:07