terraform-provider-kubernetes
Error `The configmap "aws-auth" does not exist` when deploying to AWS EKS
Terraform Version, Provider Version and Kubernetes Version
Terraform v1.1.9
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.75.1
+ provider registry.terraform.io/hashicorp/cloudinit v2.2.0
+ provider registry.terraform.io/hashicorp/external v2.2.2
+ provider registry.terraform.io/hashicorp/kubernetes v2.11.0
+ provider registry.terraform.io/hashicorp/null v3.1.1
+ provider registry.terraform.io/hashicorp/random v3.1.3
+ provider registry.terraform.io/hashicorp/tls v3.4.0
eks module ~> 18.0
Affected Resource(s)
- kubernetes_config_map_v1_data
Terraform Configuration Files
This is my .tf file:
data "aws_eks_cluster" "default" {
name = module.eks.cluster_id
}
data "aws_eks_cluster_auth" "default" {
name = module.eks.cluster_id
}
provider "kubernetes" {
host = data.aws_eks_cluster.default.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
exec {
api_version = "client.authentication.k8s.io/v1alpha1"
args = ["eks", "get-token", "--cluster-name", var.cluster_name, "--profile", var.customer-var.environment]
command = "aws"
}
# token = data.aws_eks_cluster_auth.default.token
}
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 18.0"
cluster_name = var.cluster_name
cluster_version = var.cluster_version
cluster_endpoint_private_access = true
cluster_endpoint_public_access = false
cluster_enabled_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
cluster_addons = {
coredns = {
resolve_conflicts = "OVERWRITE"
}
kube-proxy = {}
vpc-cni = {
resolve_conflicts = "OVERWRITE"
}
}
vpc_id = var.vpc_id
subnet_ids = var.subnet_ids
cluster_encryption_config = [{
provider_key_arn = var.kms_key_id
resources = ["secrets"]
}]
# EKS Managed Node Group(s)
eks_managed_node_group_defaults = {
disk_size = 50
instance_types = ["c5.large"]
}
eks_managed_node_groups = {
"${var.ng1_name}" = {
min_size = var.ng1_min_size
max_size = var.ng1_max_size
desired_size = var.ng1_desired_size
instance_types = var.ng1_instance_types
capacity_type = "ON_DEMAND"
update_config = {
max_unavailable_percentage = 50
}
tags = var.tags
}
}
node_security_group_additional_rules = var.ng1_additional_sg_rules
# aws-auth configmap
manage_aws_auth_configmap = true
tags = var.tags
}
Debug Output
Panic Output
│ Error: The configmap "aws-auth" does not exist
│
│ with module.eks-cluster.module.eks.kubernetes_config_map_v1_data.aws_auth[0],
│ on .terraform/modules/eks-cluster.eks/main.tf line 431, in resource "kubernetes_config_map_v1_data" "aws_auth":
│ 431: resource "kubernetes_config_map_v1_data" "aws_auth" {
Steps to Reproduce
- terraform apply the configuration above to create a new EKS cluster with managed node groups
Expected Behavior
EKS Cluster and nodegroup deployment
Actual Behavior
Cluster deploys, but nodegroups are not created nor registered to the cluster
Important Factoids
Deploying to AWS EKS
References
- https://github.com/terraform-aws-modules/terraform-aws-eks/issues/2009
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Same here
Hi @dm-sumup,
This issue doesn't seem to be related to the provider itself, but the community module that uses this provider. It looks like there is a big discussion in the module issue tracker. Please follow up with the discussion to address the issue.
Thank you.
Same here.
(Seems it tries to create aws-auth, which already exists.)
https://githubhot.com/repo/terraform-aws-modules/terraform-aws-eks/issues/2075
Dirty override that works for me (rough sketch below):
- Create EKS with manage_aws_auth_configmap = false and create_aws_auth_configmap = false.
- Then, after creation, change manage_aws_auth_configmap to true and set what you want.
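A minimal sketch of that two-phase flip, assuming the same module block as in the issue (only the relevant arguments are shown; apply once, then change the flag and apply again):

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.0"

  # ... cluster_name, vpc_id, node groups, etc. as above ...

  # First apply: create the cluster without touching aws-auth.
  # Second apply: flip manage_aws_auth_configmap to true and add your entries.
  create_aws_auth_configmap = false
  manage_aws_auth_configmap = false
}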
I get the same error.
Edit: It's a race condition issue. We managed to fix it by completely managing our own aws-auth ConfigMap and setting manage_aws_auth_configmap and create_aws_auth_configmap to false (rough sketch below).
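A rough sketch of that approach (illustrative only - the role ARN is a placeholder, and both module flags are set to false so the module leaves the ConfigMap alone):

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # mapRoles must be a YAML string, hence yamlencode()
    mapRoles = yamlencode([
      {
        rolearn  = "arn:aws:iam::111122223333:role/eks-node-role" # hypothetical node role
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      }
    ])
  }
}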
Same
I get the same error.
IMO, this bug will be pretty hard to fix. The cluster is created with aws-auth already present, and at the same moment the corresponding object for aws-auth would have to appear in the Terraform state (that is, as if an implicit terraform import had been executed). But Terraform supports neither that nor changing the plan on the fly.
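For illustration, the explicit import that would otherwise be needed might look roughly like this (hypothetical - a Terraform >= 1.5 import block; the exact resource address depends on your module layout, and kube-system/aws-auth follows the provider's namespace/name import ID format):

# Pull the ConfigMap that EKS already created into state before managing it.
import {
  to = module.eks.kubernetes_config_map.aws_auth[0]
  id = "kube-system/aws-auth"
}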
For anyone landing here - this issue is not related to the Kubernetes provider.
- EKS managed node groups and Fargate profiles will automatically create an aws-auth configmap in the cluster if one does not exist, which can lead to race conditions. In the terraform-aws-eks module we have provided both manage_aws_auth_configmap and create_aws_auth_configmap because of this (and for backwards-compatibility support). If you are creating a new cluster, you should be OK with setting both of these to true. HOWEVER - please understand that it is not foolproof and there is a race condition: if the EKS managed node group or Fargate profile creates the configmap before Terraform, it will fail with the error message that the configmap already exists. Conversely, if you only have manage_aws_auth_configmap and are relying on EKS managed node groups or Fargate profiles to create the configmap, you will most likely see the error message about the configmap not existing yet.
- There isn't anything else that can be done at this time to resolve these issues, unfortunately. We are all waiting for the next iteration of cluster role management, which will alleviate these race conditions and ownership issues.
In short (a rough sketch of the flag combinations follows this list):
- If you are creating a net new cluster, you should be safe setting manage_aws_auth_configmap = true and create_aws_auth_configmap = true.
- If you are creating a cluster with only self-managed node groups, you MUST set manage_aws_auth_configmap = true and create_aws_auth_configmap = true, because one will NOT be automatically created for you.
- If you know that you have an existing configmap already in the cluster, only use manage_aws_auth_configmap.
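A hypothetical way to express those combinations in one place (the variable and locals names are illustrative and not part of the module):

# Hypothetical helper: pick the aws-auth flag pair by scenario.
variable "aws_auth_scenario" {
  type    = string
  default = "net_new" # or "self_managed_only", "existing_configmap"
}

locals {
  aws_auth_flags = {
    net_new            = { create = true, manage = true }
    self_managed_only  = { create = true, manage = true }
    existing_configmap = { create = false, manage = true }
  }
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.0"

  # ... cluster_name, vpc_id, node groups, etc. as in the issue ...

  create_aws_auth_configmap = local.aws_auth_flags[var.aws_auth_scenario].create
  manage_aws_auth_configmap = local.aws_auth_flags[var.aws_auth_scenario].manage
}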
Looking at the Kubernetes APIs: if force = true is set on the kubernetes_config_map_v1_data resource, shouldn't the provider first try a GET /api/v1/namespaces/{namespace}/configmaps/{name}, do a POST /api/v1/namespaces/{namespace}/configmaps if that returns a 404, and do a PUT /api/v1/namespaces/{namespace}/configmaps/{name} if the GET returns a 200?
Isn't that the whole point of force = true on kubernetes_config_map_v1_data?
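For reference, a minimal standalone example of the resource being discussed (names are illustrative; note that, per this thread, force resolves field-manager conflicts on an existing ConfigMap rather than creating a missing one):

resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  # Data to manage inside the pre-existing ConfigMap (contents elided here).
  data = {
    mapRoles = yamlencode([])
  }

  # Take over the listed fields even if another field manager wrote them first.
  force = true
}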
Tried the route suggested above and still came up with the same error on creating a net new EKS cluster 😭
Set both manage_aws_auth_configmap and create_aws_auth_configmap to false during EKS creation; after creation succeeded, set manage_aws_auth_configmap to true - still the same issue. Checking the EKS cluster, the aws-auth configmap already exists.
The advice quoted above isn't working on an existing cluster (with an existing configmap) with EKS managed node groups.
Same problem.
A fresh new cluster created with the docker image terraform:light and the options
create_aws_auth_configmap = true
manage_aws_auth_configmap = true
crashes with the error:
module.infrastructure.module.eks.kubernetes_config_map.aws_auth[0]: Creating...
╷
│ Error: configmaps "aws-auth" already exists
│
│ with module.infrastructure.module.eks.kubernetes_config_map.aws_auth[0],
│ on .terraform/modules/infrastructure.eks/main.tf line 536, in resource "kubernetes_config_map" "aws_auth":
│ 536: resource "kubernetes_config_map" "aws_auth" {
A fresh new cluster created with Terraform installed on the local workstation (no Docker) has no problem at all.
It's crazy.
Just to reiterate: https://github.com/hashicorp/terraform-provider-kubernetes/issues/1720#issuecomment-1266937679
https://github.com/aws/containers-roadmap/issues/185 is the solution that will properly address the issues listed in this thread.
Hey guys! I think there is a lot of work to do here. The Terraform module would have to do the following (a rough local-exec sketch is below):
- make the equivalent call of aws eks update-kubeconfig --region yy --name xx
- put the kubeconfig in a temp file
- run kubectl -n kube-system patch cm aws-auth -p '{"data": { "mapUsers": {.....} }}'
- remove the temp kubeconfig file, or keep it in a terraform output
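A rough sketch of those steps expressed as a Terraform null_resource with a local-exec provisioner (illustrative only; it assumes the aws CLI and kubectl are on the PATH, that var.region and var.cluster_name exist in your configuration, and that the patch payload is supplied elsewhere):

resource "null_resource" "patch_aws_auth" {
  triggers = {
    cluster_name = var.cluster_name
  }

  provisioner "local-exec" {
    command = <<-EOT
      # Write a throwaway kubeconfig for the new cluster.
      aws eks update-kubeconfig --region ${var.region} --name ${var.cluster_name} --kubeconfig /tmp/aws-auth-kubeconfig
      # Patch the aws-auth ConfigMap in place (payload is a placeholder here).
      kubectl --kubeconfig /tmp/aws-auth-kubeconfig -n kube-system patch cm aws-auth -p '{"data":{"mapUsers":"<your mapUsers YAML here>"}}'
      # Clean up the temporary kubeconfig.
      rm -f /tmp/aws-auth-kubeconfig
    EOT
  }

  depends_on = [module.eks]
}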
aws-auth could not be created when create_aws_auth_configmap = true and manage_aws_auth_configmap = true.
I'm using hashicorp/kubernetes ~> 2.26.0, alekc/kubectl ~> 2.0.4, and aws ~> 5.0; Terraform version 1.6.4 and 1.5.3 (both).