terraform-aws-iam
Attach load-balancer-controller role to assumed role
Is your request related to a new offering from AWS?
Is this functionality available in the AWS provider for Terraform? See CHANGELOG.md, too.
- No: please wait to file a request until the functionality is available in the AWS provider
- Yes: please list the AWS provider version which introduced this functionality
- Yes, I think eks 1.22.
Is your request related to a problem? Please describe.
For the module "terraform-aws-modules/eks/aws" there is an EKS cluster, "cluster1". With it, I'm setting "create_iam_role" and "enable_irsa" to "true", which results in an assumable/assumed role, "role1", being created.
Describe the solution you'd like.
Later, with the module "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks", I'm using "role_name=cluster-autoscaler-role" and "attach_load_balancer_controller_policy=true". But once "role1" is created, neither the policies attached to "cluster-autoscaler-role" nor that role itself is attached to "role1", and I can't figure out how to attach them, so when using the following annotations:
- service.beta.kubernetes.io/aws-load-balancer-name:
- service.beta.kubernetes.io/aws-load-balancer-type: external
- service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
- service.beta.kubernetes.io/aws-load-balancer-scheme: internal
The "role1" is no longer authorized to perform actions attached to "policy" for "cluster-autoscaler-role" to provision an alb.
Describe alternatives you've considered.
My understanding is that even though the "cluster-autoscaler-role" is created, it is not attached to the "assumed role" created by the eks module.
Additional context
If more clarification is needed, I can provide it. @antonbabenko
Please fill out the template as it is provided - without seeing reproduction code it's nearly impossible to help.
@bryantbiggs Please check the following code.
eks-cluster.tf
module "eks" {
source = "terraform-aws-modules/eks/aws"
# eks module version
version = "18.21.0"
cluster_name = var.eks_cluster_name
# kubernetes version to use for eks cluster
cluster_version = var.kubernetes_version
# new in version "18.21.0"
enable_irsa = true
create_iam_role = true
create_cloudwatch_log_group = false
iam_role_use_name_prefix = false
# Run nodes in private subnets
# Auto-scaling nodes will run in private subnets
# nodes will have private-IPs
subnet_ids = module.vpc.private_subnets
# Indicates whether or not the Amazon EKS private API server endpoint is enabled, default = false
cluster_endpoint_private_access = true
vpc_id = module.vpc.vpc_id
# worker_groups VS node_groups
# node_groups are aws eks managed nodes whereas worker_groups are self managed nodes.
# Among many one advantage of worker_groups is that you can use your custom AMI for the nodes.
# Nodes launched as part of a managed node group are automatically tagged for auto-discovery by the Kubernetes cluster autoscaler.
# The Auto Scaling group of a managed node group spans every subnet that you specify when you create the group.
eks_managed_node_group_defaults = {
instance_types = var.node_instance_type
disk_size = var.node_ami_disk_size
ami_type = var.node_ami_type
create_launch_template = false
launch_template_name = ""
}
# EKS Add-ons -> https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html
cluster_addons = {
coredns = {
resolve_conflicts = "OVERWRITE"
}
kube-proxy = {}
vpc-cni = {
resolve_conflicts = "OVERWRITE"
}
}
# https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html
# https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/17.24.0/submodules/node_groups
# Make sure to use the version = 17.24.0
# Every managed node is provisioned as part of an Amazon EC2 Auto Scaling group that's managed for you by Amazon EKS.
eks_managed_node_groups = {
nodes = {
desired_size = var.nodes_desired_capacity
max_size = var.nodes_max_capacity
min_size = var.nodes_min_capacity
instance_types = var.node_instance_type
# With On-Demand Instances, you pay for compute capacity by the second, with no long-term commitments.
capacity_type = var.node_capacity_type
k8s_labels = {
Environment = "on_demand"
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
}
additional_tags = {
ExtraTag = "on_demand-node"
}
}
}
}
data "aws_eks_cluster" "cluster" {
name = module.eks.cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks.cluster_id
}
data "aws_iam_openid_connect_provider" "cluster_oidc_arn" {
arn = module.eks.oidc_provider_arn
}
lb-controller.tf
module "load_balancer_controller_irsa_role" {
source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
role_name = "load-balancer-controller"
attach_load_balancer_controller_policy = true
oidc_providers = {
ex = {
provider_arn = module.eks.oidc_provider_arn
namespace_service_accounts = ["kube-system:aws-load-balancer-controller"]
}
}
# tags = local.tags
depends_on = [
module.eks
]
}
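As an optional sketch (the output name is mine), exposing the ARN this module creates makes the wiring easier to check later, since that ARN is what the controller's service account has to carry in its eks.amazonaws.com/role-arn annotation:

output "load_balancer_controller_irsa_role_arn" {
  # iam_role_arn is the output of the iam-role-for-service-accounts-eks module
  value = module.load_balancer_controller_irsa_role.iam_role_arn
}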
helm.tf
provider "helm" {
kubernetes {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
args = ["eks", "get-token", "--cluster-name", var.eks_cluster_name]
command = "aws"
}
}
}
# AWS Load Balancer Controller
resource "helm_release" "load_balancer_controller" {
name = "load-balancer-controller-release"
repository = "https://aws.github.io/eks-charts/"
chart = "aws-load-balancer-controller"
namespace = "kube-system"
set {
name = "clusterName"
value = var.eks_cluster_name
}
set {
name = "serviceAccount.create"
value = true
}
set {
name = "image.repository"
# https://docs.aws.amazon.com/eks/latest/userguide/add-ons-images.html
value = "602401143452.dkr.ecr.${var.region}.amazonaws.com/amazon/aws-load-balancer-controller"
}
set {
name = "serviceAccount.name"
value = "aws-load-balancer-controller"
}
depends_on = [
module.eks
]
}
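For completeness, a hedged sketch of an alternative wiring: keep serviceAccount.create = true and annotate the chart-managed service account with the IRSA role ARN. This assumes the eks-charts chart exposes serviceAccount.annotations; dots in the key are escaped so Helm treats the annotation key as a single value:

# Hypothetical variant of the release above (resource name is arbitrary).
resource "helm_release" "load_balancer_controller_annotated" {
  name       = "load-balancer-controller-release"
  repository = "https://aws.github.io/eks-charts/"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"

  set {
    name  = "clusterName"
    value = var.eks_cluster_name
  }

  set {
    # annotate the chart-managed service account with the IRSA role
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = module.load_balancer_controller_irsa_role.iam_role_arn
  }
}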
I'm sorry, but I'm not following this issue very well:
- What are you trying to do?
- What do you expect to happen?
- What actually happened?
What are you trying to do
With eks-cluster.tf, an assumable/assumed role ("role1") is created, and with it the EKS cluster is created; "role1" ends up with a set of permissions when the code is executed.
Once the cluster is created, I need it to have the "aws load balancer controller", so I'm using the code in lb-controller.tf; the policy in that code is also created and attached to the role "load-balancer-controller".
I'm deploying the "aws load balancer controller" using helm.tf.
What do you expect to happen
All of this code works fine (I know there isn't any connector between the two roles), but I want the role created in eks-cluster.tf to be given either the "load-balancer-controller" role itself (which isn't possible) or the policies attached to it.
What actually happened
Because of this, when the following annotations are used on a Service:
- service.beta.kubernetes.io/aws-load-balancer-scheme: internal
- service.beta.kubernetes.io/aws-load-balancer-type: external
- service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
I'm unable to provision an "nlb" with the "aws load balancer controller", even though its role is created: the role created in eks-cluster.tf is not authorized to perform the action.
Adding to the above, I'm unsure about one thing: two roles are being created, one from the main eks module block and another from the nested eks_managed_node_groups block in eks-cluster.tf.
From the AWS console, I can see that one has "Trusted entities = AWS Service: ec2" (the eks_managed_node_groups role) and the other has "Trusted entities = AWS Service: eks" (the main eks module block role).
I'm not sure why there are two roles; I tried searching for this online but couldn't find a clear explanation.
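For what it's worth, the two roles are expected: the module creates the cluster IAM role (trusted by eks.amazonaws.com) and each managed node group gets its own node IAM role (trusted by ec2.amazonaws.com). A hedged sketch, assuming the v18 outputs cluster_iam_role_arn and eks_managed_node_groups, to surface both after apply:

# Sketch only; the output names are arbitrary.
output "cluster_iam_role_arn" {
  # Cluster role, trusted by eks.amazonaws.com
  value = module.eks.cluster_iam_role_arn
}

output "node_group_iam_role_arns" {
  # One node role per managed node group, trusted by ec2.amazonaws.com
  value = { for name, group in module.eks.eks_managed_node_groups : name => group.iam_role_arn }
}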
This issue has been automatically marked as stale because it has been open 30 days with no activity. Remove the stale label or comment, or this issue will be closed in 10 days.
This issue was automatically closed because it had been stale for 10 days.
@ashishjullia I think I was having the same issue you're (almost) describing.
The issue:
Using the setup that @ashishjullia shared above and trying to deploy a TargetGroupBinding will make kubectl complain with:
error when creating "./kube/cluster_config/overlays/dev":
admission webhook "mtargetgroupbinding.elbv2.k8s.aws" denied the request:
unable to get target group IP address type:
NoCredentialProviders: no valid providers in chain. Deprecated.
I assume this is what you meant by:
I'm unable to provision an "nlb" with "aws load balancer controller" even though the role for it is also getting created. "role created in eks-cluster.tf is not authorized to perform the action.
The solution
I needed to create the role (as you're doing above):
module "alb_controller_role" {
source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
role_name = "aws-load-balancer-controller-role"
attach_load_balancer_controller_policy = true
attach_load_balancer_controller_targetgroup_binding_only_policy = true
oidc_providers = {
main = {
provider_arn = module.eks.oidc_provider_arn
namespace_service_accounts = ["my-namespace:controller-svc-account"]
}
}
}
and then create the service account, and link it using annotations:
resource "kubernetes_service_account" "lb_controller_svc_acc" {
metadata {
name = "controller-svc-account"
namespace = "my-namespace"
annotations = {
# this is what links the role to the service account
"eks.amazonaws.com/role-arn": module.alb_controller_role.iam_role_arn
}
}
}
and then make the controller use this service account:
# AWS Load Balancer Controller
resource "helm_release" "load_balancer_controller" {
  name      = "load-balancer-controller-release"
  namespace = "my-namespace" # must match the namespace of the svc account

  set {
    name  = "clusterName"
    value = your_cluster_name
  }

  set {
    name  = "serviceAccount.create"
    value = false # the default service account always gave me the same error above.
  }

  set {
    name  = "serviceAccount.name"
    value = "controller-svc-account" # use the name of the service account you created earlier.
  }

  ... # other config
}
I spent three days trying to figure this out. Documentation seems to be a bit sparse when it comes to explaining how to link the created IRSA role with a service account the controller can use via Terraform. The built-in service account never worked for me either (using serviceAccount.create=true always gave me the "admission webhook denied the request" error).
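For anyone who wants to sanity-check the wiring after apply, a small hedged sketch (the output name is arbitrary): surface the annotation from the service account resource and confirm it carries the role ARN:

output "lb_controller_sa_role_arn" {
  # Should print the ARN of module.alb_controller_role
  value = kubernetes_service_account.lb_controller_svc_acc.metadata[0].annotations["eks.amazonaws.com/role-arn"]
}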
@fadulalla Thanks for recognizing this as an issue. I was able to solve it as well; thanks for sharing!
I'm going to lock this issue because it has been closed for 30 days. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.