terraform-google-kubernetes-engine

Module outputs in node_pool cause plan to fail

gtear opened this issue

TL;DR

Terraform can't create a plan when resource outputs are used in the node_pools list(map(string)). The module uses each pool's name attribute to build a map(map(string)) for for_each, and that map isn't fully known at plan time when any of the pool attributes comes from a resource output.

Why is node_pools a list(map(string)) and not a map(map(string)) when the module converts the list(map(string)) to a map(map(string)) anyway?
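For context, the conversion inside the module looks roughly like this (a sketch of the pattern, not the module's exact code):

```hcl
# Sketch: the module keys the map by each pool's "name" attribute,
# then iterates the result with for_each.
locals {
  node_pools = { for pool in var.node_pools : pool["name"] => pool }
}

resource "google_container_node_pool" "pools" {
  for_each = local.node_pools
  name     = each.key
  # ...
}
```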

Expected behavior

For Terraform to create the plan, which it can do if no resource outputs are present in the map.

Observed behavior

```
╷
│ Error: Invalid for_each argument
│
│   on .terraform/modules/testinfrastructure.main.infrastructure.fullcluster.gke/modules/private-cluster/cluster.tf line 300, in resource "google_container_node_pool" "pools":
│  300:   for_each = local.node_pools
│     ├────────────────
│     │ local.node_pools is a map of map of string, known only after apply
│
│ The "for_each" map includes keys derived from resource attributes that
│ cannot be determined until apply, and so Terraform cannot determine the
│ full set of keys that will identify the instances of this resource.
│
│ When working with unknown values in for_each, it's better to define the map
│ keys statically in your configuration and place apply-time results only in
│ the map values.
│
│ Alternatively, you could use the -target planning option to first apply
│ only the resources that the for_each value depends on, and then apply a
│ second time to fully converge.
╵
```
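The same diagnostic can be triggered outside the module with a minimal standalone config (a hypothetical sketch using random_id and null_resource, unrelated to the module's actual resources). Here the map key itself is unknown; in the module's case the whole derived map becomes unknown, which fails for the same reason:

```hcl
# The map key is derived from a resource attribute, so the full key set
# is unknown at plan time and for_each fails with "Invalid for_each argument".
resource "random_id" "suffix" {
  byte_length = 4
}

locals {
  pools = {
    "pool-${random_id.suffix.hex}" = { machine_type = "e2-medium" }
  }
}

resource "null_resource" "pool" {
  for_each = local.pools
  triggers = each.value
}
```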

Terraform Configuration

```hcl
node_pools = [
  {
    name               = "default-node-pool"
    machine_type       = var.default.instancetype
    node_locations     = "europe-west2-a"
    min_count          = var.default.minnodes
    max_count          = var.default.maxnodes
    local_ssd_count    = 0
    disk_size_gb       = var.default.instancedisksize
    disk_type          = "pd-standard"
    image_type         = "COS_CONTAINERD"
    enable_gcfs        = false
    auto_repair        = true
    auto_upgrade       = true
    service_account    = local.derived_service_email # module.gke_service_account.email
    preemptible        = false
    initial_node_count = var.default.minnodes
  },
  {
    name               = "datasci-node-pool"
    machine_type       = var.sci.instancetype
    node_locations     = "europe-west2-a"
    min_count          = var.sci.minnodes
    max_count          = var.sci.maxnodes
    local_ssd_count    = 0
    disk_size_gb       = var.sci.instancedisksize
    disk_type          = "pd-standard"
    image_type         = "COS_CONTAINERD"
    enable_gcfs        = false
    auto_repair        = true
    auto_upgrade       = true
    # Change the service account here to the output of a service account
    # resource to reproduce the error. Specifically, this has to be on the
    # non-default node pool to reproduce the error.
    service_account    = local.derived_service_email # module.gke_service_account.email
    preemptible        = false
    initial_node_count = var.sci.minnodes
  }
]
```

Terraform Version

1.2.5

Additional information

Documentation about this would be fine; the workaround is to use depends_on and define everything statically.
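That workaround can be sketched as follows (hypothetical resource and variable names; the module source path and inputs are assumptions, not a verified configuration):

```hcl
# Create the node service account as a plain resource.
resource "google_service_account" "gke_nodes" {
  account_id   = "gke-nodes"
  display_name = "GKE node service account"
}

module "gke" {
  source = "terraform-google-modules/kubernetes-engine/google//modules/private-cluster"
  # ...

  node_pools = [
    {
      name = "datasci-node-pool"
      # Construct the email statically instead of referencing
      # google_service_account.gke_nodes.email, so every value in the
      # map is known at plan time.
      service_account = "gke-nodes@${var.project_id}.iam.gserviceaccount.com"
      # ...
    },
  ]

  # Make the ordering explicit, since the output is no longer referenced.
  depends_on = [google_service_account.gke_nodes]
}
```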

gtear · Aug 21 '22 07:08