terraform-aws-eks
Managed EKS Node Groups with Launch Template - Missing Security Groups
I am trying to create an EKS cluster using an external launch template and to attach an additional security group to the worker nodes as well, but when I create the cluster, I see only one security group attached to the node groups (eks-cluster-sg-stage-tech-eks-cluster-1899388174).
The following security groups are missing:
- Node security group (the default SG created as part of EKS)
- Additional security group (used to open ports from VPC hosts)
Code
```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.26.3"

  cluster_name    = local.eks_cluster_name
  cluster_version = var.eks_cluster_version

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = true

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  cluster_addons = {
    coredns = {
      resolve_conflicts = "OVERWRITE"
    }
    kube-proxy = {}
    vpc-cni = {
      resolve_conflicts = "OVERWRITE"
    }
  }

  node_security_group_additional_rules = {
    ingress_self_all = {
      description = "Node to node all ports/protocols"
      protocol    = "-1"
      from_port   = 0
      to_port     = 0
      type        = "ingress"
      self        = true
    }
    egress_all = {
      description      = "Node all egress"
      protocol         = "-1"
      from_port        = 0
      to_port          = 0
      type             = "egress"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }
    # Resolve the AWS Load Balancer Controller Target Registration Issue
    ingress_allow_access_from_control_plane = {
      type                          = "ingress"
      protocol                      = "tcp"
      from_port                     = 9443
      to_port                       = 9443
      source_cluster_security_group = true
      description                   = "Allow access from control plane to webhook port of AWS load balancer controller"
    }
  }

  # EKS Managed Node Group(s)
  eks_managed_node_group_defaults = {
    ami_type       = "AL2_x86_64"
    instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"]

    # Additional Security Groups to be attached
    vpc_security_group_ids = [aws_security_group.all_worker_mgmt.id]
  }

  eks_managed_node_groups = {
    nodegroup-1 = {
      min_size       = var.eks_min_worker_nodes
      max_size       = var.eks_max_worker_nodes
      desired_size   = var.eks_desired_worker_nodes
      instance_types = [var.eks_worker_instance_type]
      capacity_type  = "ON_DEMAND"
      # disk_size    = var.eks_worker_root_volume_size

      create_launch_template  = false
      launch_template_name    = aws_launch_template.eks_launch_template.name
      launch_template_version = aws_launch_template.eks_launch_template.latest_version
    }
  }

  # Creates OIDC Provider to Enable IRSA (IAM Role for Service Account)
  enable_irsa = true

  # https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html
  cluster_enabled_log_types = ["api", "audit", "controllerManager", "scheduler", "authenticator"]

  tags = local.tags
}
```
When using a custom launch template, users have to specify the security groups that should be attached to nodes in the template.
@bryantbiggs : I tried that as well, but when I do, the nodes are not getting attached to the node group, since it replaces all of the other EKS-required SGs. How do I add all of the SGs? If I am not wrong, there should be two default security groups attached to any worker node (one is the cluster SG and the other is the node SG)
```hcl
network_interfaces {
  associate_public_ip_address = false
  delete_on_termination       = true
  security_groups             = [aws_security_group.all_worker_mgmt.id]
}
```
You need to list ALL security groups to attach in the launch template; it's not additive when using a custom launch template.
@bryantbiggs : I am creating the launch template well before the EKS cluster and its components are created, so those SGs will only be created along with the EKS cluster. I don't think it is possible to attach them to the launch template. Am I missing something here?
That's correct - it's a chicken-and-egg scenario. Either you create the security groups externally and attach them to your template, or you utilize the functionality provided by the module's template.
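For illustration, the first option might look roughly like this (a minimal sketch; `all_worker_mgmt` and the launch template come from this thread, the remaining attributes are assumptions):

```hcl
# Security group managed outside the module, so it exists before the
# launch template is created and can be referenced by it.
resource "aws_security_group" "all_worker_mgmt" {
  name_prefix = "all-worker-mgmt"
  vpc_id      = module.vpc.vpc_id
}

resource "aws_launch_template" "eks_launch_template" {
  name_prefix = "eks-workers-"

  network_interfaces {
    associate_public_ip_address = false
    delete_on_termination       = true

    # Not additive: every security group the nodes need must be listed here.
    # Module-created SGs cannot be listed without a dependency cycle, which
    # is the chicken-and-egg problem described above.
    security_groups = [aws_security_group.all_worker_mgmt.id]
  }
}
```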
Any particular reason why the module's custom launch template doesn't work for your situation?
@bryantbiggs : If I use the built-in template, I am not able to add an SSH key pair to SSH into the hosts. If I use the `remote_access` block, it throws an error, so that is clearly not supported. Is there a way to attach a key pair to the nodes? That should solve my issue.
Yes, `remote_access` is only available when using the default launch template created by the EKS managed node group service: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/examples/eks_managed_node_group/main.tf#L138-L142

You should be able to specify `key_name` in the node group to add the PEM key. Also, I would look at using SSM Session Manager instead of PEM keys. Adding the `arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore` policy to the nodes will allow access via SSM, since the SSM agent is installed by default on EKS optimized AMIs.
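A rough sketch combining both suggestions with the module's own launch template (a minimal example, not taken from this thread; it assumes the `iam_role_additional_policies` input is available in this module version, and reuses the variable names from above):

```hcl
eks_managed_node_groups = {
  nodegroup-1 = {
    min_size     = var.eks_min_worker_nodes
    max_size     = var.eks_max_worker_nodes
    desired_size = var.eks_desired_worker_nodes

    # SSH key pair baked into the module-managed launch template
    key_name = var.eks_worker_key_pair

    # Assumed input: attaches the SSM policy so nodes are reachable via
    # Session Manager; the SSM agent ships with the EKS optimized AMIs.
    iam_role_additional_policies = [
      "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
    ]
  }
}
```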
@bryantbiggs : So I missed the point. We are talking about two launch templates here: one is provided by the EKS Terraform module (with this we can't use the `remote_access` block), but if we use the launch template provided by the EKS managed node group service, we can use the `remote_access` block. Are there any major differences between these two templates?
@bryantbiggs : I tried with the code below, and again none of the security groups got attached other than eks-cluster-sg-stage-tech-eks-cluster-1899388174, which allows traffic from the EKS control plane. It is creating the two security groups below as well, but not attaching them anywhere:
- stage-tech-eks-cluster-cluster
- stage-tech-eks-cluster-node
```hcl
eks_managed_node_groups = {
  nodegroup-1 = {
    create_launch_template = false
    launch_template_name   = ""

    min_size       = var.eks_min_worker_nodes
    max_size       = var.eks_max_worker_nodes
    desired_size   = var.eks_desired_worker_nodes
    instance_types = [var.eks_worker_instance_type]
    capacity_type  = "ON_DEMAND"

    block_device_mappings = {
      xvda = {
        device_name = "/dev/xvda"
        ebs = {
          volume_size           = var.eks_worker_root_volume_size
          volume_type           = var.eks_worker_root_volume_type
          delete_on_termination = true
        }
      }
    }

    remote_access = {
      ec2_ssh_key               = var.eks_worker_key_pair
      source_security_group_ids = [aws_security_group.bastion_sg.id]
    }
  }
}
```


Please see the AWS documentation on using the EKS managed node group default launch template, as well as the module's documentation on security group design considerations: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/network_connectivity.md#security-groups
Closing for now, since everything appears to be working as intended per those two sources.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.