terraform-aws-eks-blueprints
[Bug]: using "module.eks_blueprints.eks_cluster_id" fails when refreshing "module.eks_blueprints.kubernetes_config_map.aws_auth[0]"
Welcome to Amazon EKS Blueprints!
- [X] Yes, I've searched similar issues on GitHub and didn't find any.
Amazon EKS Blueprints Release version
4.4.0
What is your environment, configuration and the example used?
I am trying to run this on my local machine (macOS) and on GitHub Actions (Ubuntu).
data "aws_eks_cluster" "cluster" {
name = module.eks_blueprints.eks_cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks_blueprints.eks_cluster_id
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.cluster.token
}
provider "helm" {
kubernetes {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.cluster.token
}
}
What did you do and What did you see instead?
Hi, I am able to successfully create an EKS cluster on my first run. However, when I run terraform destroy I get an error during the step module.eks_blueprints.kubernetes_config_map.aws_auth[0]: Refreshing state... [id=kube-system/aws-auth]. The error message is:
│ Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp [::1]:80: connect: connection refused
│
│ with module.eks_blueprints.kubernetes_config_map.aws_auth[0],
│ on .terraform/modules/eks_blueprints/aws-auth-configmap.tf line 1, in resource "kubernetes_config_map" "aws_auth":
│ 1: resource "kubernetes_config_map" "aws_auth" {
I noticed that when I hard-code my cluster name in the aws_eks_cluster and aws_eks_cluster_auth data sources, the terraform destroy succeeds.
Additional Information
No response
Hi @ecs-jnguyen, in order to properly destroy all resources, the guidance is to use -target and destroy them in reverse order, as shown here.
Hi @askulkarni2, I tried running terraform destroy -target="module.eks_blueprints_kubernetes_addons" -auto-approve first and it still failed with the same error for me.
Try adding the gavinbunney/kubectl provider and running terraform init -upgrade. Then try your destroy again.
terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14"
    }
  }
}
provider "kubectl" {
apply_retry_count = 3
host = module.eks_blueprints.eks_cluster_endpoint
cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
load_config_file = false
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
# This requires the awscli to be installed locally where Terraform is executed
args = ["eks", "get-token", "--cluster-name", module.eks_blueprints.eks_cluster_id]
}
}
I don't think it's the provider that's the issue for me. When I hard-code my EKS cluster name like below instead of using module.eks_blueprints.eks_cluster_id, the destroy works okay.
data "aws_eks_cluster" "cluster" {
name = "my_eks_cluster_name"
}
data "aws_eks_cluster_auth" "cluster" {
name = "my_eks_cluster_name"
}
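A variant of that workaround, if you'd rather not embed the literal name, is to feed the same value through a plain variable instead of the module output. This is only a sketch, assuming a hypothetical cluster_name variable set to the same name the module receives:

variable "cluster_name" {
  type    = string
  default = "my_eks_cluster_name" # hypothetical; must match the name passed to the module
}

data "aws_eks_cluster" "cluster" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "cluster" {
  name = var.cluster_name
}

Either way the data sources no longer depend on a module output, which seems to be why the destroy goes through.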
I was sure I had the same issue on destroy this morning and, through testing, copied the kubectl provider block from an earlier working setup and it fixed the issue. Maybe a red herring.
I can give this a try though. Did you remove the provider "kubernetes" block when you used this one?
No, that was left in. It was also in my working example, so I left it in.
Edit:
I feel it's the load_config_file = false that's forcing Terraform to renew its short-lived auth to AWS.
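Following that thought, the same exec-based authentication can be used on the kubernetes provider itself, so the token is fetched fresh on each run instead of coming from a data source. A rough sketch, reusing the module outputs from the kubectl block above (not something I've verified end to end):

provider "kubernetes" {
  host                   = module.eks_blueprints.eks_cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # Requires the AWS CLI wherever Terraform runs, same as the kubectl provider block
    args = ["eks", "get-token", "--cluster-name", module.eks_blueprints.eks_cluster_id]
  }
}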
Unfortunately, I ran into the same issue when using provider "kubectl".
If you create an output for module.eks_blueprints.eks_cluster_id, do a terraform refresh and then terraform output, do you see the name you expected?
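Something like this, assuming the module block is named eks_blueprints as in the snippets above:

output "eks_cluster_id" {
  # Surface the cluster name the module actually exports, for checking with terraform output
  value = module.eks_blueprints.eks_cluster_id
}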
Same issue, did you figure out a fix?
@tanvp112 yes I see the name that I am expecting.
@dhf22 I had a workaround where I had to hard-code the name of the cluster instead.
Closing for now, but please feel free to respond or open a new issue if the problem still persists.