terraform-aws-eks-cluster

"aws-auth" is forbidden: User "system:anonymous" cannot get resource

Dmitry1987 opened this issue 2 years ago · 0 comments

Found a bug? Maybe our Slack Community can help.

Describe the Bug

When creating a cluster, I got this error:

│ Error: configmaps "aws-auth" is forbidden: User "system:anonymous" cannot get resource "configmaps" in API group "" in the namespace "kube-system"

Expected Behavior

ConfigMap access for the Terraform user that creates the cluster should be enabled by default, but something seems to go wrong and that access is not granted.
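For reference, my understanding is that the module talks to the cluster API itself in order to write the aws-auth ConfigMap, so the caller's credentials have to reach the Kubernetes provider somehow; the error above looks like the request went out unauthenticated. Below is a minimal sketch of the kind of authentication I would expect to be in effect. The aws_eks_cluster_auth data source and the kubernetes provider arguments are standard, but the module output names are assumptions on my part and may not match the exact outputs of the pinned module version:

# Sketch only: how a Kubernetes provider is usually authenticated against a new
# EKS cluster so its requests are not treated as "system:anonymous".
# Output names (eks_cluster_id, eks_cluster_endpoint, eks_cluster_certificate_authority_data)
# are assumptions, not verified against this module version.
data "aws_eks_cluster_auth" "this" {
  name = module.eks_cluster.eks_cluster_id
}

provider "kubernetes" {
  host                   = module.eks_cluster.eks_cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks_cluster.eks_cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.this.token
}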

Steps to Reproduce

Steps to reproduce the behavior: Run cluster creation with this config:


module "eks_cluster" {
  source             = "cloudposse/eks-cluster/aws"
  region             = var.aws-region
  kubernetes_version = var.k8s-version
  vpc_id             = aws_vpc.k8s-vpc.id
  subnet_ids         = aws_subnet.k8s-private[*].id
  service_ipv4_cidr  = var.cluster-services-ip-range
  allowed_cidr_blocks           = ["10.0.0.0/16"]
  allowed_security_group_ids    = [aws_security_group.allow-vpn-and-ci.id]
  associated_security_group_ids = [aws_security_group.allow-vpn-and-ci.id]
  endpoint_private_access = true
  endpoint_public_access  = true
  enabled_cluster_log_types    = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
  cluster_log_retention_period = 14
  map_additional_iam_roles = [
    {
      rolearn  = "arn:aws:iam::xxxxxxxxxxxxxx:role/Devops"
      username = "devops"
      groups   = ["system:masters"]
    }
  ]
  map_additional_iam_users = [
    {
      userarn  = "arn:aws:iam::xxxxxxxxxxxxxx:user/Someusername"
      username = "devops"
      groups   = ["system:masters"]
    }
  ]
  addons = [{
    addon_name               = "aws-ebs-csi-driver"
    addon_version            = "v1.11.2-eksbuild.1"
    resolve_conflicts        = "OVERWRITE"
    service_account_role_arn = aws_iam_role.ebs-driver.arn
  }]
  create_eks_service_role   = true
  oidc_provider_enabled     = true
  apply_config_map_aws_auth = true
  context = module.label.context
}

Versions:

Terraform v1.2.9
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.75.2
+ provider registry.terraform.io/hashicorp/helm v2.6.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.12.1
+ provider registry.terraform.io/hashicorp/null v3.1.1
+ provider registry.terraform.io/hashicorp/random v3.4.3
+ provider registry.terraform.io/hashicorp/tls v4.0.1

Notes

I first created this cluster with only the private endpoint enabled, like this:

  endpoint_private_access = true
  endpoint_public_access  = false

but then I realized I would have to deal with dnsmasq, which I don't want to bother with for a staging cluster at the moment, so I turned the public endpoint on and ran it again, and got this error. (In the previous run some other resources failed, but as I understand it the cluster itself was created; it failed now after I changed only the public endpoint to enabled. Maybe in that earlier failed run the cluster was still in progress and not really complete, so only now did it reach the aws-auth stage.) In any case, I don't understand why the same user that runs the creation process is not allowed to access aws-auth. How is that possible? Maybe someone has encountered the same issue? 🙏

Thanks.

Dmitry1987 · Sep 15 '22 14:09