
Kubernetes cluster is recreated, but only the node_pool should be recreated

kitimark opened this issue 3 years ago · 1 comment

Terraform Version and Provider Version

Terraform v0.13.2
+ provider registry.terraform.io/digitalocean/digitalocean v1.22.2

Affected Resource(s)

  • digitalocean_kubernetes_cluster

Terraform Configuration Files

# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.

variable "do_token" {}

provider "digitalocean" {
  token = var.do_token
}

resource "digitalocean_kubernetes_cluster" "k8s-mark-cluster" {
  name = "k8s-mark-cluster"
  region = "sgp1"
  version = "1.17.11-do.0"
  surge_upgrade = true

  node_pool {
    name = "pool-main"
    size = "s-2vcpu-4gb" # Upgrade instance size
    node_count = 2
  }
}

output "cluster-id" {
  value = digitalocean_kubernetes_cluster.k8s-mark-cluster.id
}

Expected Behavior

What should have happened? Only the node_pool should have been recreated; the cluster itself should be updated in place, as the plan below indicates.

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.k8s-mark-cluster will be updated in-place
  ~ resource "digitalocean_kubernetes_cluster" "k8s-mark-cluster" {
        auto_upgrade   = false
        cluster_subnet = "10.244.0.0/16"
        created_at     = "2020-09-09 09:20:00 +0000 UTC"
        endpoint       = "<censored>"
        id             = "<censored>"
        ipv4_address   = "<censored>"
        kube_config    = (sensitive value)
        name           = "k8s-mark-cluster"
        region         = "sgp1"
        service_subnet = "10.245.0.0/16"
        status         = "running"
        surge_upgrade  = true
        tags           = []
        updated_at     = "2020-09-09 09:46:53 +0000 UTC"
        version        = "1.17.11-do.0"
        vpc_uuid       = "<censored>"

      ~ node_pool {
          ~ actual_node_count = 1 -> (known after apply)
            auto_scale        = false
          ~ id                = "<censored>" -> (known after apply)
            labels            = {}
            max_nodes         = 0
            min_nodes         = 0
            name              = "pool-main"
            node_count        = 1
          ~ nodes             = [
              - {
                  - created_at = "2020-09-09 09:20:00 +0000 UTC"
                  - droplet_id = "<censored>"
                  - id         = "<censored>"
                  - name       = "pool-main-3a5fl"
                  - status     = "running"
                  - updated_at = "2020-09-09 09:25:05 +0000 UTC"
                },
            ] -> (known after apply)
          ~ size              = "s-1vcpu-2gb" -> "s-2vcpu-4gb"
            tags              = []
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Actual Behavior

What actually happened? When I apply this configuration, Terraform destroys the cluster and creates a new one.

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.k8s-mark-cluster must be replaced
-/+ resource "digitalocean_kubernetes_cluster" "k8s-mark-cluster" {
      - auto_upgrade   = false -> null
      ~ cluster_subnet = "10.244.0.0/16" -> (known after apply)
      ~ created_at     = "2020-09-09 09:20:00 +0000 UTC" -> (known after apply)
      ~ endpoint       = "<censored>" -> (known after apply)
      ~ id             = "<censored>" -> (known after apply)
      ~ ipv4_address   = "<censored>" -> (known after apply)
      ~ kube_config    = (sensitive value)
        name           = "k8s-mark-cluster"
        region         = "sgp1"
      ~ service_subnet = "10.245.0.0/16" -> (known after apply)
      ~ status         = "running" -> (known after apply)
        surge_upgrade  = true
      - tags           = [] -> null
      ~ updated_at     = "2020-09-09 09:46:53 +0000 UTC" -> (known after apply)
        version        = "1.17.11-do.0"
      ~ vpc_uuid       = "<censored>" -> (known after apply)

      ~ node_pool {
          ~ actual_node_count = 1 -> (known after apply)
            auto_scale        = false
          ~ id                = "<censored>" -> (known after apply)
          - labels            = {} -> null
          - max_nodes         = 0 -> null
          - min_nodes         = 0 -> null
            name              = "pool-main"
            node_count        = 1
          ~ nodes             = [
              - {
                  - created_at = "2020-09-09 09:20:00 +0000 UTC"
                  - droplet_id = "<censored>"
                  - id         = "<censored>"
                  - name       = "pool-main-3a5fl"
                  - status     = "running"
                  - updated_at = "2020-09-09 09:25:05 +0000 UTC"
                },
            ] -> (known after apply)
          ~ size              = "s-1vcpu-2gb" -> "s-2vcpu-4gb" # forces replacement
          - tags              = [] -> null
        }
    }

Plan: 1 to add, 0 to change, 1 to destroy.
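
As a possible workaround (a sketch only, not an official fix; the `pool-default` name and sizes below are assumptions), the pool that needs resizing can be managed as a separate `digitalocean_kubernetes_node_pool` resource instead of the cluster's inline `node_pool` block. Resizing a standalone pool should replace only that pool, not the cluster:

```hcl
# Sketch of a workaround: keep the cluster's required default pool small and
# stable, and manage the pool you expect to resize as a standalone resource.
resource "digitalocean_kubernetes_cluster" "k8s-mark-cluster" {
  name          = "k8s-mark-cluster"
  region        = "sgp1"
  version       = "1.17.11-do.0"
  surge_upgrade = true

  # Required default pool; leave it untouched to avoid cluster replacement.
  node_pool {
    name       = "pool-default" # assumed name
    size       = "s-1vcpu-2gb"
    node_count = 1
  }
}

resource "digitalocean_kubernetes_node_pool" "pool-main" {
  cluster_id = digitalocean_kubernetes_cluster.k8s-mark-cluster.id
  name       = "pool-main"
  size       = "s-2vcpu-4gb" # changing this should replace only this pool
  node_count = 2
}
```

Note the default pool cannot be removed entirely, since the cluster resource requires at least one inline `node_pool` block.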

Steps to Reproduce

  1. terraform plan

Important Factoids

  • DigitalOcean Kubernetes (DOKS)

References

none

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

kitimark · Sep 09 '20 20:09

Related to #424

morgangrubb · Jan 20 '21 18:01