terraform-aws-eks

aws-auth configmap changes after adding a node pool, removing existing roles

Open gabricc opened this issue 1 year ago • 0 comments

Description

Adding a new node group to an existing EKS cluster changes the aws-auth ConfigMap in an undesired way: it removes existing roles from the ConfigMap (terminal output screenshot attached; see the sketch below for the kind of entries involved).

  • [X] ✋ I have searched the open/closed issues and my issue is not listed.
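For context, a hedged sketch (not taken from this issue; the ARNs below are placeholders) of the kind of entries the aws-auth ConfigMap's mapRoles data holds before the change: one entry per managed node group IAM role, plus the custom roles passed via aws_auth_roles.

locals {
  # Illustrative only: rendered shape of data.mapRoles in the kube-system/aws-auth ConfigMap
  example_map_roles = yamlencode([
    {
      # standard mapping created for a managed node group's IAM role
      rolearn  = "arn:aws:iam::111111111111:role/spot_ng_1-eks-node-group" # placeholder ARN
      username = "system:node:{{EC2PrivateDNSName}}"
      groups   = ["system:bootstrappers", "system:nodes"]
    },
    {
      # custom mapping supplied through the aws_auth_roles input
      rolearn  = "arn:aws:iam::111111111111:role/eks-engineers-role" # placeholder account id
      username = "eks-engineers-role"
      groups   = ["readonly"]
    },
  ])
}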

Versions

  • Module version [Required]: 19.21.0

  • Terraform version: 1.6.6

  • Provider version(s):
terraform providers -version
Terraform v1.6.6
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v5.32.1
+ provider registry.terraform.io/hashicorp/cloudinit v2.3.3
+ provider registry.terraform.io/hashicorp/kubernetes v2.15.0
+ provider registry.terraform.io/hashicorp/random v3.6.0
+ provider registry.terraform.io/hashicorp/time v0.10.0
+ provider registry.terraform.io/hashicorp/tls v4.0.5

Reproduction Code

data "aws_subnets" "subnets" {
  filter {
    name   = "vpc-id"
    values = [module.network.vpc_id]
  }
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.21.0"

  cluster_name    = "slang-eks-${local.environment}"
  cluster_version = "1.28"

  cluster_endpoint_public_access = true

  cluster_enabled_log_types              = []
  cloudwatch_log_group_retention_in_days = 1

  cluster_addons = {
    coredns = {
      preserve = true

      timeouts = {
        create = "25m"
        delete = "10m"
      }
    }
    kube-proxy = {
    }
    vpc-cni = {
      preserve = true
    }
  }

  vpc_id     = module.network.vpc_id
  subnet_ids = data.aws_subnets.subnets.ids

  node_security_group_enable_recommended_rules = false

  # Enable node to node communication
  node_security_group_additional_rules = {
    ingress_self_all = {
      description = "Node to node all ports/protocols"
      protocol    = "-1"
      from_port   = 0
      to_port     = 0
      type        = "ingress"
      self        = true
    }
    egress_all = {
      description      = "Node all egress"
      protocol         = "-1"
      from_port        = 0
      to_port          = 0
      type             = "egress"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }
    # Control plane to nodes
    ingress_cluster_to_node_all_traffic = {
      description                   = "Cluster API to Nodegroup all traffic"
      protocol                      = "-1"
      from_port                     = 0
      to_port                       = 0
      type                          = "ingress"
      source_cluster_security_group = true
    }
  }

  # EKS Managed Node Group(s)
  eks_managed_node_group_defaults = {
    disk_size      = 50
    instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"]
  }

  eks_managed_node_groups = {

    spot_ng_1 = {
      create_security_group = false

      subnet_ids = module.network.public_subnets

      min_size     = 0
      max_size     = 20
      desired_size = 0

      instance_types = ["t3.xlarge", "t2.xlarge"]
      capacity_type  = "SPOT"

      enable_monitoring = false

      taints = {
        dedicated = {
          key    = "node-type"
          value  = "spot"
          effect = "NO_SCHEDULE"
        }
      }
    },

    spot_ng_2 = {
      create_security_group = false

      subnet_ids = module.network.public_subnets

      min_size     = 0
      max_size     = 20
      desired_size = 1

      instance_types = ["t3.xlarge", "t3.large", "t3a.xlarge"]
      capacity_type  = "SPOT"

      enable_monitoring = false

      taints = {
        dedicated = {
          key    = "node-type"
          value  = "spot"
          effect = "NO_SCHEDULE"
        }
      }
    }
  }

  # Fargate Profile(s)
  fargate_profiles = {
    default = {
      name = "default"
      selectors = [
        {
          namespace = "default"
        }
      ]
      # Using specific subnets instead of the subnets supplied for the cluster itself
      subnet_ids = module.network.private_subnets
    }
  }

  # aws-auth configmap
  manage_aws_auth_configmap = true

  aws_auth_roles = [
    {
      rolearn  = "arn:aws:iam::****:role/eks-engineers-role"
      username = "eks-engineers-role"
      groups   = ["readonly"]
    },
  ]

  aws_auth_users = [
    {
      userarn  = "arn:aws:iam::***:user/g.carvalho"
      username = "user"
      groups   = ["system:masters", "system:bootstrappers", "system:nodes", "eks-console-dashboard-full-access-group"]
    },
  ]

  aws_auth_accounts = [
    "***"
  ]

  tags = merge(
    local.base_common_tags,
    local.tags,
    {
      component = "slang-${local.environment}-eks"
    }
  )
}

Steps to reproduce the behavior: add a new node group to eks_managed_node_groups in the eks module and run terraform plan (a sketch of such an addition follows below). I'm using Terraform workspaces.
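For illustration, a change of roughly this shape (the node group name and sizes are hypothetical, not the actual change) is enough to trigger the unwanted aws-auth diff:

  # Hypothetical new entry added to the existing eks_managed_node_groups map above
  eks_managed_node_groups = {
    # ...spot_ng_1 and spot_ng_2 unchanged, as shown above...

    spot_ng_3 = {
      subnet_ids = module.network.public_subnets

      min_size     = 0
      max_size     = 10
      desired_size = 0

      instance_types = ["t3.xlarge"]
      capacity_type  = "SPOT"
    }
  }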

Expected behavior

A new node group should be created, and the aws-auth ConfigMap should be updated to add the required role mapping for the new node group while leaving the existing entries in place.
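Concretely, the expectation is that a mapping along these lines (ARN is a placeholder; the real role is created by the module for the new node group) gets appended to mapRoles while the eks-engineers-role entry and the other existing mappings stay in place:

locals {
  # Illustrative only: the standard node-role mapping EKS needs for new nodes to join
  expected_new_entry = {
    rolearn  = "arn:aws:iam::111111111111:role/spot_ng_3-eks-node-group" # placeholder ARN
    username = "system:node:{{EC2PrivateDNSName}}"
    groups   = ["system:bootstrappers", "system:nodes"]
  }
}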

Actual behavior

aws-auth is changed in an undesired way: existing roles are removed from the configmap.

Terminal Output Screenshot(s)

Attached above ☝️

gabricc · Jan 12 '24 21:01