terraform-provider-kubectl
failed to create kubernetes rest client for read of resource
kubectl = {
  source  = "gavinbunney/kubectl"
  version = ">= 1.14.0"
}
....
provider "kubectl" {
  config_path      = local_file.kube_config.filename
  config_context   = var.kube_config_context
  load_config_file = false
}
I constantly get:
Error: failed to create kubernetes rest client for read of resource: Get "http://localhost/api?timeout=32s": dial tcp [::1]:80: connect: connection refused
This seems to happen only on destroy.
I've tried both load_config_file = true and false to no avail. The option is confusing: which path does it refer to, the default kubeconfig on the system or the one in config_path? And is it mutually exclusive with config_path or not?
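For what it's worth, my reading of the provider docs is that load_config_file controls whether a kubeconfig file is read at all, so with it set to false the provider ignores config_path and falls back to its defaults, which would be consistent with the client dialing localhost. A sketch of what I'd expect to work, assuming the local_file resource still exists at refresh/destroy time:

```hcl
# Sketch only: with load_config_file = true the provider should actually
# read the file referenced by config_path. The local_file resource and
# variable names are taken from the config above.
provider "kubectl" {
  config_path      = local_file.kube_config.filename
  config_context   = var.kube_config_context
  load_config_file = true # false appears to make the provider skip config_path entirely
}
```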
I'm getting this on apply as well. Doing research to see if it's related to https://support.hashicorp.com/hc/en-us/articles/4408936406803-Kubernetes-Provider-block-fails-with-connect-connection-refused-
I don't think so, though, since the initial apply works fine and creates the resources; only the state refresh fails 🤔
Getting the same issue here. It works normally, and after some time it starts returning the same error reported by @bitsofinfo.
provider "kubectl" {
  apply_retry_count      = 5
  host                   = aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.this.certificate_authority[0].data)
  load_config_file       = false

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", aws_eks_cluster.this.id]
  }
}
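One pattern that may help with the destroy-time failures: connection attributes derived from a managed resource can be unknown while that resource is being planned for destruction, and some providers then fall back to localhost. Reading the cluster details from data sources is often more robust. A sketch, assuming the same cluster and the standard aws_eks_cluster data source (the "this" names are placeholders):

```hcl
# Sketch, not from the thread: source the connection details from a data
# source so the provider still gets concrete values during refresh/destroy.
data "aws_eks_cluster" "this" {
  name = aws_eks_cluster.this.id
}

provider "kubectl" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  load_config_file       = false

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.this.name]
  }
}
```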
kubectl = {
  source  = "gavinbunney/kubectl"
  version = "~> 1.14"
}
I've switched to this provider: https://registry.terraform.io/providers/alekc/kubectl/latest/docs