terraform-provider-digitalocean
`terraform destroy` command on a Kubernetes cluster doesn't remove Droplets in its Nodepool
Bug Report
Describe the bug
Issuing a terraform destroy command on a "digitalocean_kubernetes_cluster" resource that has no additional node pools removes the cluster, but the Droplets in the default node pool are not removed.
I noticed this behavior because creating the cluster took 5m30s while destroying it took only 1s.
After checking the DO console, I found that the Droplets had not been removed and were still running.
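The leftover Droplets can also be confirmed from the CLI. This is a sketch assuming `doctl` is installed and authenticated; the `droplets_in_pool` helper is my own, based on the fact that worker Droplets are named after the node pool (`worker-pool-frffl` below):

```shell
# Filter "Name Status" lines down to Droplets whose name starts with
# the node pool name followed by a dash. The listing command is passed
# in as arguments so it can be swapped out or stubbed.
droplets_in_pool() {
  pool="$1"
  shift
  "$@" | awk -v p="$pool" 'index($1, p "-") == 1'
}

# Real usage (requires an authenticated doctl):
# droplets_in_pool worker-pool \
#   doctl compute droplet list --format Name,Status --no-header
```

If this prints any rows after terraform destroy completes, the Droplets are still around.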
Affected Resource(s)
- digitalocean_kubernetes_cluster
Expected Behavior
Upon issuing a terraform destroy command on the cluster's resource, I expected that the Droplets in the default node pool would be removed along with the cluster.
Actual Behavior
The Droplets in the default node pool were not removed and are still running.
Steps to Reproduce
Run terraform apply including the resource declaration below:
resource "digitalocean_kubernetes_cluster" "cluster" {
  name    = "uniglot-cluster-abc"
  region  = "sgp1"
  version = "1.26.3-do.0"

  node_pool {
    name       = "worker-pool"
    size       = "s-1vcpu-2gb"
    auto_scale = true
    min_nodes  = 3
    max_nodes  = 5
  }
}
which should apply successfully.
Run terraform destroy, and the generated plan will be similar to:
digitalocean_kubernetes_cluster.cluster: Refreshing state... [id=c8beca30-502d-4d4d-807c-256bedff2e91]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
# digitalocean_kubernetes_cluster.cluster will be destroyed
- resource "digitalocean_kubernetes_cluster" "cluster" {
- auto_upgrade = false -> null
- cluster_subnet = "10.244.0.0/16" -> null
- created_at = "2023-05-19 06:38:28 +0000 UTC" -> null
- endpoint = "https://c8beca30-502d-4d4d-807c-256bedff2e91.k8s.ondigitalocean.com" -> null
- ha = false -> null
- id = "c8beca30-502d-4d4d-807c-256bedff2e91" -> null
- kube_config = (sensitive value) -> null
- name = "uniglot-cluster-abc" -> null
- region = "sgp1" -> null
- registry_integration = false -> null
- service_subnet = "10.245.0.0/16" -> null
- status = "running" -> null
- surge_upgrade = true -> null
- tags = [] -> null
- updated_at = "2023-05-19 06:43:57 +0000 UTC" -> null
- urn = "do:kubernetes:c8beca30-502d-4d4d-807c-256bedff2e91" -> null
- version = "1.26.3-do.0" -> null
- vpc_uuid = "f0a6ba4a-011a-4fa7-ac8e-57add95e8eca" -> null
- maintenance_policy {
- day = "any" -> null
- duration = "4h0m0s" -> null
- start_time = "0:00" -> null
}
- node_pool {
- actual_node_count = 3 -> null
- auto_scale = true -> null
- id = "94673e6d-066f-4626-bdde-6a546fe41f93" -> null
- labels = {} -> null
- max_nodes = 5 -> null
- min_nodes = 3 -> null
- name = "worker-pool" -> null
- node_count = 0 -> null
- nodes = [
- {
- created_at = "2023-05-19 06:38:28 +0000 UTC"
- droplet_id = "356101631"
- id = "6003a756-baa9-4b25-a962-8027fd529f2c"
- name = "worker-pool-frffl"
- status = "running"
- updated_at = "2023-05-19 06:40:39 +0000 UTC"
},
] -> null
- size = "s-1vcpu-2gb" -> null
- tags = [] -> null
}
}
Plan: 0 to add, 0 to change, 1 to destroy.
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
digitalocean_kubernetes_cluster.cluster: Destroying... [id=c8beca30-502d-4d4d-807c-256bedff2e91]
digitalocean_kubernetes_cluster.cluster: Destruction complete after 1s
Destroy complete! Resources: 1 destroyed.
Terraform Configuration Files
# main.tf
terraform {
required_providers {
digitalocean = {
source = "digitalocean/digitalocean"
version = "~> 2.0"
}
}
}
variable "do_token" {}
provider "digitalocean" {
token = var.do_token
}
# do-secrets.tfvars
do_token = "your_do_token_12345"
Terraform version
Terraform v1.4.6
on darwin_arm64
+ provider registry.terraform.io/digitalocean/digitalocean v2.28.1
Additional context
- In resource_kubernetes_cluster.go, the function
resourceDigitalOceanKubernetesClusterDelete() uses client.Kubernetes.Delete() (link), which corresponds to this API endpoint. The reference says that this endpoint is used to delete a Kubernetes cluster and all services deployed to it.
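To make the point above concrete, the provider's delete boils down to a single DELETE request against the cluster endpoint. A sketch (the cluster id is just the one from this report; the token is a placeholder):

```shell
# Build the request the provider issues when destroying the cluster.
CLUSTER_ID="c8beca30-502d-4d4d-807c-256bedff2e91"
request="DELETE https://api.digitalocean.com/v2/kubernetes/clusters/${CLUSTER_ID}"
echo "$request"

# Equivalent curl call (destructive; actually deletes the cluster):
# curl -X DELETE \
#   -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
#   "https://api.digitalocean.com/v2/kubernetes/clusters/${CLUSTER_ID}"
```

The call itself returns quickly; the actual Droplet teardown happens asynchronously on the DOKS side, which matches the 1s destroy time observed above.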
Can someone confirm whether this is an expected behavior?
I assumed terraform destroy would also destroy the Droplets created via the required node_pool configuration.
Issuing a terraform destroy should indeed destroy the Droplets associated with the cluster's node pools, though Terraform does not manage those Droplets directly. The destroy command makes an API request to the DigitalOcean Kubernetes Service (DOKS) API, and DOKS then destroys the Droplets that are part of the cluster. This is an asynchronous operation that may take some time to complete.
Are these Droplets being left indefinitely or do they eventually go away?
@andrewsomething Let me try again in a few days and then I can tell you whether the Droplets eventually get destroyed.
I ran into the same problem. My expectation was that terraform destroy would wait for the node pool to finish clearing all Droplets before exiting.
In my case, the Droplet was still there one minute after the Kubernetes cluster was destroyed.
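Since the teardown is asynchronous, a stopgap is to poll after terraform destroy until the listing comes back empty. A sketch, assuming `doctl` is authenticated; the `k8s:<cluster-uuid>` tag is how DOKS appears to label worker Droplets, so verify it against your own account before relying on it:

```shell
# Poll a listing command until it prints nothing or the timeout expires.
# Returns 0 once the listing is empty, 1 on timeout.
wait_for_droplets() {
  list_cmd="$1"        # command printing one line per remaining Droplet
  timeout="${2:-300}"  # seconds before giving up
  interval="${3:-10}"  # seconds between polls
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if [ -z "$(eval "$list_cmd")" ]; then
      return 0
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  return 1
}

# Usage after `terraform destroy` (cluster id from the plan output above):
# wait_for_droplets 'doctl compute droplet list \
#   --tag-name k8s:c8beca30-502d-4d4d-807c-256bedff2e91 --format ID --no-header'
```

The listing command is passed as a string so any lister works; a CI pipeline could fail the job on the timeout return code.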