terraform-provider-kubernetes
Provider version 2.30.0 fails provider configuration on terraform plan
I attempted to upgrade from 2.29.0 to 2.30.0 and the provider suddenly started throwing errors on `terraform plan`. Looking at the release notes, I don't see anything relevant to our configuration that would produce this error. Simply downgrading back to 2.29.0 is sufficient as a workaround. Also worth noting: this only happens during initial cluster creation. After a successful run on 2.29.0, I can upgrade to 2.30.0 and everything works fine.
```
│ Error: Provider configuration: cannot load Kubernetes client config
│
│   with provider["registry.terraform.io/hashicorp/kubernetes"],
│   on main.tf line 17, in provider "kubernetes":
│   17: provider "kubernetes" {
│
│ invalid configuration: default cluster has no server defined
```
The issue is only present on new builds; if I run a `terraform plan` against an environment that already has a cluster, there is no error.
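For reference, the downgrade workaround is just a version pin in `required_providers`. A minimal sketch of that pin (standard constraint syntax; adjust to your setup):

```hcl
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      # Pinned to the last release that plans cleanly on a fresh build;
      # 2.30.0 fails with "cannot load Kubernetes client config".
      version = "2.29.0"
    }
  }
}
```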
Terraform Version, Provider Version and Kubernetes Version
Terraform version: `1.8.4` (also tried `1.4.7`)
Kubernetes provider version: `2.30.0`
Kubernetes version: `1.29`
Affected Resource(s)
`terraform plan` fails due to "invalid configuration" in the `kubernetes` provider block.
Terraform Configuration Files
- Provider definition
provider "kubernetes" {
host = data.aws_eks_cluster.default.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
exec {
api_version = "client.authentication.k8s.io/v1"
command = "aws"
# This requires the awscli to be installed locally where Terraform is executed
args = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.default.name, "--region", var.region]
env = {
AWS_PROFILE = var.profile
}
}
}
- Kube Config Template file
```yaml
apiVersion: v1
clusters:
- cluster:
    server: ${EKS_SERVICE_ENDPOINT}
    certificate-authority-data: ${EKS_CA_DATA}
  name: ${APPLICATION}-${ENVIRONMENT}
contexts:
- context:
    cluster: ${APPLICATION}-${ENVIRONMENT}
    user: ${APPLICATION}-${ENVIRONMENT}-admin
  name: ${APPLICATION}-${ENVIRONMENT}-system
current-context: ${APPLICATION}-${ENVIRONMENT}-system
kind: Config
preferences: {}
users:
- name: ${APPLICATION}-${ENVIRONMENT}-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1
      interactiveMode: IfAvailable
      command: aws
      args:
      - "eks"
      - "get-token"
      - "--cluster-name"
      - "${K8_CLUSTER_NAME}"
      env:
      - name: AWS_PROFILE
        value: ${AWS_PROFILE}
```
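The placeholders in this template are presumably filled from the same cluster data; a purely hypothetical rendering with `templatefile()` and the `hashicorp/local` provider would look roughly like the sketch below (the variable names are illustrative, not our exact pipeline):

```hcl
# Hypothetical sketch only -- illustrative variable names, not our exact pipeline.
resource "local_file" "kubeconfig" {
  filename = "${path.module}/kubeconfig.yaml"
  content = templatefile("${path.module}/kubeconfig.yaml.tpl", {
    EKS_SERVICE_ENDPOINT = data.aws_eks_cluster.default.endpoint
    EKS_CA_DATA          = data.aws_eks_cluster.default.certificate_authority[0].data
    APPLICATION          = var.application # illustrative
    ENVIRONMENT          = var.environment # illustrative
    K8_CLUSTER_NAME      = var.EKSClusterName
    AWS_PROFILE          = var.profile
  })
}
```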
- Cluster Data definition
data "aws_eks_cluster" "default" {
depends_on = [ module.EKS.EKS-Server-Endpoint ]
name = var.EKSClusterName
}
- EKS Module definition
module "EKS" {
source = "terraform-aws-modules/eks/aws"
version = "18.30.0"
...blah...
...blah...
...blah...
}
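One observation that might help narrow this down: because `data.aws_eks_cluster.default` has a `depends_on` on the EKS module, its attributes are unknown at plan time on a fresh build, which would line up with the error only appearing before the cluster exists. A variant that reads the connection details straight from the module outputs instead is sketched below (assuming the module exposes `cluster_endpoint` and `cluster_certificate_authority_data` outputs, as terraform-aws-modules/eks 18.x should); I have not confirmed whether it changes the behavior on 2.30.0:

```hcl
# Sketch only -- reads connection details from the module outputs rather than
# the data source; untested against provider 2.30.0.
provider "kubernetes" {
  host                   = module.EKS.cluster_endpoint
  cluster_ca_certificate = base64decode(module.EKS.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.EKSClusterName, "--region", var.region]
    env = {
      AWS_PROFILE = var.profile
    }
  }
}
```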
Expected Behavior
A successful plan to create an EKS cluster.
Actual Behavior
```
│ Error: Provider configuration: cannot load Kubernetes client config
│
│   with provider["registry.terraform.io/hashicorp/kubernetes"],
│   on main.tf line 17, in provider "kubernetes":
│   17: provider "kubernetes" {
│
│ invalid configuration: default cluster has no server defined
```
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment