terraform-provider-digitalocean
Add kubectl credentials renewal.
Terraform Version
Terraform v0.11.13
Affected Resource(s)
- digitalocean_kubernetes_cluster
Expected Behavior
The Kubernetes cluster credentials are renewed automatically.
Actual Behavior
The credentials expire and are not renewed automatically.
Steps to Reproduce
- Create a cluster
- Wait 7+ days for the credentials to expire.
References
- doctl automatically configures kubectl to retrieve the new credentials with this code here
So I implemented this with:
resource "null_resource" "kubeconfig" {
  provisioner "local-exec" {
    command = "doctl k cluster kubeconfig save ${digitalocean_kubernetes_cluster.this.name}"
  }

  triggers = {
    cluster_config = element(digitalocean_kubernetes_cluster.this.kube_config[*], 0).raw_config
  }
}
However, since raw_config does not guarantee key order, this can cause a change on every terraform plan run.
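One way to avoid that noisy diff is to trigger on the credentials' expiry timestamp instead of the whole raw config. This is a sketch, assuming the kube_config block exports an expires_at attribute that only changes when fresh credentials are issued:

```hcl
resource "null_resource" "kubeconfig" {
  provisioner "local-exec" {
    command = "doctl kubernetes cluster kubeconfig save ${digitalocean_kubernetes_cluster.this.name}"
  }

  triggers = {
    # expires_at only changes when the provider fetches fresh credentials,
    # so the plan stays clean between renewals (unlike raw_config, whose
    # key order can differ run to run).
    credentials_expiry = digitalocean_kubernetes_cluster.this.kube_config[0].expires_at
  }
}
```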
That's a cool one. Right now I'm using:
data "external" "script" {
  program    = ["k8s-resources/script.sh", "${digitalocean_kubernetes_cluster.api.name}", "${var.do_region}"]
  depends_on = ["kubernetes_service.api"]
}
It's a script because I need to add resources not yet supported by the kubernetes provider, and the script runs

doctl kubernetes cluster kubeconfig save $CLUSTER_NAME &> out.log
kubectl config use-context $CONTEXT_NAME &> out.log

to add the cluster credentials to the kubectl config.
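Note that a program used with data "external" must print a single JSON object on stdout, which is why the doctl/kubectl output above is redirected to out.log. A minimal sketch of such a script follows; the argument order and the do-&lt;region&gt;-&lt;name&gt; context naming are assumptions, and the `|| true` guards only keep the sketch runnable when doctl/kubectl are absent (a real script should fail instead):

```shell
# Hypothetical sketch of k8s-resources/script.sh for the external data source.
emit_kubeconfig_context() {
  cluster_name="$1"
  do_region="$2"
  # doctl names contexts do-<region>-<cluster> by convention (assumption).
  context_name="do-${do_region}-${cluster_name}"

  # Keep stdout clean for Terraform: send all tool output to out.log.
  doctl kubernetes cluster kubeconfig save "$cluster_name" > out.log 2>&1 || true
  kubectl config use-context "$context_name" >> out.log 2>&1 || true

  # data "external" expects exactly one JSON object on stdout.
  printf '{"context": "%s"}\n' "$context_name"
}

emit_kubeconfig_context "my-cluster" "nyc1"
# -> {"context": "do-nyc1-my-cluster"}
```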
But I think this can be done automatically by the provider, just an idea.
Support for calling out to an exec-credential plugin was added to the Kubernetes provider itself:
provider "kubernetes" {
  load_config_file = false

  host = digitalocean_kubernetes_cluster.foo.endpoint
  cluster_ca_certificate = base64decode(
    digitalocean_kubernetes_cluster.foo.kube_config[0].cluster_ca_certificate
  )

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "doctl"
    args = ["kubernetes", "cluster", "kubeconfig", "exec-credential",
      "--version=v1beta1", digitalocean_kubernetes_cluster.foo.id]
  }
}
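With this setup the provider runs the doctl command whenever it needs credentials, so a fresh token is fetched before each expiry and nothing in Terraform state goes stale. The command prints a Kubernetes ExecCredential object; its shape looks roughly like the following (values here are placeholders):

```json
{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": {
    "token": "<short-lived-bearer-token>",
    "expirationTimestamp": "2019-01-01T00:00:00Z"
  }
}
```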