terraform-linode-k8s

Add destroy lifecycle on worker node deletion

Open cliedeman opened this issue 7 years ago • 1 comment

provisioner "local-exec" {
    when = "destroy"
    command = <<EOF
export KUBECONFIG=${path.module}/secrets/admin.conf
kubectl drain --delete-local-data --force --ignore-daemonsets ${self.name}
kubectl delete nodes/${self.name}
EOF
}

Don't forget to adapt the export KUBECONFIG= path to wherever your kubeconfig actually lives.

cliedeman avatar Sep 22 '18 13:09 cliedeman

This is a little trickier because a node can't drain itself currently:

core@default-node-3 ~ $ sudo KUBECONFIG=/etc/kubernetes/kubelet.conf kubectl drain --delete-local-data --force --ignore-daemonsets default-node-3
node/default-node-3 cordoned
error: unable to drain node "default-node-3", aborting command...

There are pending nodes to be drained:
 default-node-3
error: daemonsets.extensions "calico-node" is forbidden: User "system:node:default-node-3" cannot get resource "daemonsets" in API group "extensions" in the namespace "kube-system": calico-node-qsvlg; daemonsets.extensions "csi-linode-node" is forbidden: User "system:node:default-node-3" cannot get resource "daemonsets" in API group "extensions" in the namespace "kube-system": csi-linode-node-fwrmv; daemonsets.extensions "kube-proxy" is forbidden: User "system:node:default-node-3" cannot get resource "daemonsets" in API group "extensions" in the namespace "kube-system": kube-proxy-v5qzc; daemonsets.extensions "container-linux-update-agent" is forbidden: User "system:node:default-node-3" cannot get resource "daemonsets" in API group "extensions" in the namespace "reboot-coordinator": container-linux-update-agent-rxl4z

Remote-exec: local SSH agent forwarding would allow the node to SSH to the master and issue the drain command there, but the nodes currently don't know the master's address via Terraform variables. The nodes could discover the API server address with kubectl commands or by parsing their kubeconfig files.
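For illustration, a rough remote-exec sketch of that idea. It assumes a hypothetical var.master_ip variable (the module does not currently expose one), that SSH agent forwarding lets the node authenticate to the master as the core user, and that the master keeps its admin kubeconfig at the kubeadm default path:

provisioner "remote-exec" {
  when = "destroy"

  # Run the drain and delete on the master, since the node can't drain itself.
  inline = [
    "ssh -o StrictHostKeyChecking=no core@${var.master_ip} sudo kubectl --kubeconfig /etc/kubernetes/admin.conf drain --delete-local-data --force --ignore-daemonsets ${self.name}",
    "ssh -o StrictHostKeyChecking=no core@${var.master_ip} sudo kubectl --kubeconfig /etc/kubernetes/admin.conf delete node ${self.name}",
  ]

  connection {
    host  = "${self.ip_address}"
    user  = "core"
    agent = true
  }
}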

Local-exec: if we can rely on the generated kubeconfig file still existing in the Terraform workspace after it was initially created, then we could use "local-exec" and local kubectl commands to drain the nodes, as sketched below.
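A minimal local-exec sketch under that assumption, reusing the secrets/admin.conf path from the snippet above and skipping the cleanup if the kubeconfig is missing:

provisioner "local-exec" {
    when = "destroy"
    command = <<EOF
# Skip gracefully if the kubeconfig no longer exists in the workspace.
if [ -f "${path.module}/secrets/admin.conf" ]; then
  export KUBECONFIG=${path.module}/secrets/admin.conf
  kubectl drain --delete-local-data --force --ignore-daemonsets ${self.name}
  kubectl delete node ${self.name}
fi
EOF
}

The guard means a destroy run from a workspace that no longer has the kubeconfig simply skips the drain instead of failing the whole destroy.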

displague avatar Mar 28 '19 20:03 displague