terraform-metal-multiarch-k8s

Deleting a node via Terraform should delete the Kubernetes node

Open • displague opened this issue on Apr 29, 2021 • 2 comments

When the node count is reduced, a destroy-triggered provisioner should attempt to drain and delete the node.

For example:

resource "metal_device" "..." {
  // ...
  provisioner "local-exec" {
    when        = destroy
    command = "kubectl -f ${kubeconfig}  delete node ${self.hostname}" // we would want to drain/cordon first. can we get the kubeconfig path in this block?
  }
}
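
A rough sketch of what the drain-then-delete could look like (the ${kubeconfig} reference is still just a placeholder until we know how to get the path into a destroy-time provisioner, and the drain flags are only a guess at sensible defaults):

resource "metal_device" "..." {
  // ...
  provisioner "local-exec" {
    when    = destroy
    // Drain (which also cordons) before removing the node object;
    // ${kubeconfig} remains a placeholder for a path we can actually
    // reference from a destroy-time provisioner.
    command = <<EOT
kubectl --kubeconfig ${kubeconfig} drain ${self.hostname} --ignore-daemonsets --delete-emptydir-data --force
kubectl --kubeconfig ${kubeconfig} delete node ${self.hostname}
EOT
  }
}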

The CCM (#64) would handle the eventual cleanup of nodes deleted via Terraform or the UI, but this approach would allow Terraform-deleted nodes to be cleaned up less abruptly.

displague avatar Apr 29 '21 13:04 displague

I like this idea a lot. This should be doable: in other locations we've assumed the kubeadm admin kubeconfig (/etc/kubernetes/admin.conf), so I don't see why we couldn't here.
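
For instance, a minimal sketch assuming that path, and assuming the admin kubeconfig is readable from wherever terraform destroy runs (the resource naming here is only illustrative):

resource "metal_device" "node" {
  // ...
  provisioner "local-exec" {
    when    = destroy
    // Assumes /etc/kubernetes/admin.conf is readable on the machine running
    // terraform destroy (e.g. Terraform runs on the controller, or the file
    // has been copied down); otherwise the path would need adjusting.
    command = "kubectl --kubeconfig /etc/kubernetes/admin.conf delete node ${self.hostname}"
  }
}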

jmarhee avatar Apr 29 '21 19:04 jmarhee

#90 addresses this on a best-effort basis using local-exec: it makes an attempt, and reminds the user to clean up manually if KUBECONFIG is not set, because a destroy-triggered resource can't consume variables in a way that would let me pass in a path. Perhaps this is something the Kubernetes provider could address (though, if I recall, it cannot delete a node).
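
For reference, a sketch of what that best-effort local-exec could look like, using a null_resource so the hostname is available through self.triggers (the naming is illustrative, not the exact #90 implementation):

resource "null_resource" "node_cleanup" {
  triggers = {
    hostname = metal_device.node.hostname // illustrative reference
  }

  provisioner "local-exec" {
    when    = destroy
    // Only self (here self.triggers) is available in a destroy-time
    // provisioner, so this leans on the operator's KUBECONFIG rather than a
    // passed-in path, and falls back to a manual-cleanup reminder.
    command = <<EOT
if [ -n "$KUBECONFIG" ]; then
  kubectl drain ${self.triggers.hostname} --ignore-daemonsets --delete-emptydir-data --force || true
  kubectl delete node ${self.triggers.hostname}
else
  echo "KUBECONFIG is not set; please drain and delete node ${self.triggers.hostname} manually"
fi
EOT
  }
}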

jmarhee avatar Jun 07 '21 17:06 jmarhee