terraform-aws-eks
Adding additional policy to node group IAM role
Description
We are creating multiple clusters using the same set of Terraform files; the variable values for each cluster are set using tfvars files. Each of our clusters can have a different set of EKS managed node groups, and these are controlled by setting the create flag in the respective cluster's tfvars file.
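For illustration, a per-cluster tfvars file looks roughly like the sketch below; the attribute names are the ones referenced in the configuration that follows, but the values are placeholders and each cluster sets its own.

node_group_configuration = {
  basic_a_arm_r6g = {
    create       = true
    desired_size = 3
    min_size     = 1
    max_size     = 6
    subnet_index = 0
  }
  intel_m5 = {
    create       = false # this node group is skipped for this cluster
    desired_size = 0
    min_size     = 0
    max_size     = 0
    subnet_index = 1
  }
}

The node groups themselves are defined as: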
eks_managed_node_groups = [
  {
    create                 = var.node_group_configuration.basic_a_arm_r6g.create
    name                   = "${var.cluster_name}-arm-r6"
    node_group_name_prefix = "${var.cluster_name}-arm-r6"
    launch_template_name   = "basic-a-arm-r6g-"
    instance_types         = ["r6g.xlarge"]
    desired_size           = var.node_group_configuration.basic_a_arm_r6g.desired_size
    min_size               = var.node_group_configuration.basic_a_arm_r6g.min_size
    max_size               = var.node_group_configuration.basic_a_arm_r6g.max_size
    subnets                = [module.vpc.private_subnets[var.node_group_configuration.basic_a_arm_r6g.subnet_index]]
    additional_tags = {
      Name = "${local.full_cluster_name}-basic-a-arm-r6g"
    }
  },
  {
    create                 = var.node_group_configuration.intel_m5.create
    name                   = "${var.cluster_name}-intel-m5"
    node_group_name_prefix = "${var.cluster_name}-intel-m5"
    ami_type               = "AL2_x86_64"
    launch_template_name   = "intel-m5-"
    instance_types         = ["m5.xlarge"]
    desired_size           = var.node_group_configuration.intel_m5.desired_size
    min_size               = var.node_group_configuration.intel_m5.min_size
    max_size               = var.node_group_configuration.intel_m5.max_size
    subnets                = [module.vpc.private_subnets[var.node_group_configuration.intel_m5.subnet_index]]
    additional_tags = {
      Name = "${local.full_cluster_name}-intel-m5"
    }
  }
]
In addition to this, we are attaching an additional policy to the node groups. As per this comment, to avoid the Terraform issue with for_each we are using aws_iam_role_policy_attachment to attach the additional policy.
resource "aws_iam_role_policy_attachment" "additional" {
for_each = module.eks.eks_managed_node_groups
policy_arn = aws_iam_policy.s3_write_policy.arn
role = each.value.iam_role_name
}
Let's take a simple example of the eks module creating a cluster with two node groups and the following scenarios:
- ClusterA: uses both node groups
- ClusterB: uses only one node group
When we try to deploy these two clusters, ClusterA is deployed successfully. For ClusterB, the IAM policy is attached for the node group that is enabled to be created, but the apply fails with:
aws_iam_role_policy_attachment.additional["0"]: Creation complete after 0s [id=arm-r6-eks-node-group-20220429042911230300000001-20220429091830789400000002]
│ Error: Error attaching policy arn:aws:iam::1233333:policy/additional-policy to IAM Role : InvalidParameter: 1 validation error(s) found.
│ - minimum field size of 1, AttachRolePolicyInput.RoleName.
│
│   with aws_iam_role_policy_attachment.additional["1"],
│   on cluster.tf line 173, in resource "aws_iam_role_policy_attachment" "additional":
│  173: resource "aws_iam_role_policy_attachment" "additional" {
This seems to indicate that the output module.eks.eks_managed_node_groups is returning both node group names, irrespective of whether the create flag is set to true or false.
To work around this, we tried to parse the output module.eks.eks_managed_node_groups and add a condition to check whether iam_role_name is set.
resource "aws_iam_role_policy_attachment" "additional" {
for_each = {
for node_group, group_details in module.eks.eks_managed_node_groups : node_group => group_details
# We have to add if condition as the module output contains all node names,
# even if they are not created.
if group_details.iam_role_name != ""
}
policy_arn = aws_iam_policy.s3_write_policy.arn
role = each.value.iam_role_name
}
But this doesn't seem to help, and we end up hitting the same Terraform issue with for_each:
╷
│ Error: Invalid for_each argument
│
│   on cluster.tf line 175, in resource "aws_iam_role_policy_attachment" "additional":
│  175:   for_each = {
│  176:     for node_group, group_details in module.eks.eks_managed_node_groups : node_group => group_details
│  177:     # We have to add if condition as the module output contains all node names,
│  178:     # even if they are not created.
│  179:     if group_details.iam_role_name != ""
│  180:   }
│     ├────────────────
│     │ module.eks.eks_managed_node_groups is object with 2 attributes
│
│ The "for_each" value depends on resource attributes that cannot be
│ determined until apply, so Terraform cannot predict how many instances will
│ be created. To work around this, use the -target argument to first apply
│ only the resources that the for_each depends on.
Could you please see if the output module.eks.eks_managed_node_groups can be limited to only the node groups which are actually created? Also, please suggest any alternate options so that we can overcome this blocker.
Versions
- Module version [Required]: 18.20.5
- Terraform version: 1.1.7
- Provider version(s): 4.12.1
Same issue exists for self_managed_node_groups and iam_role_additional_policies:
│ Error: Invalid for_each argument
│
│   on .terraform/modules/my_cluster.eks/modules/self-managed-node-group/main.tf line 533, in resource "aws_iam_role_policy_attachment" "this":
│  533:   for_each = var.create && var.create_iam_instance_profile ? toset(compact(distinct(concat([
│  534:     "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
│  535:     "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
│  536:   ], var.iam_role_additional_policies)))) : toset([])
│     ├────────────────
│     │ local.iam_role_policy_prefix is "arn:aws:iam::aws:policy"
│     │ var.create is true
│     │ var.create_iam_instance_profile is true
│     │ var.iam_role_additional_policies is a list of string, known only after apply
│
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the for_each depends on.
There was an issue tracking upgrades https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1744, but this issue is for both upgrades and fresh installs.
This issue https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1988 mentions using a policy doc to get around this limitation.
This was probably just not tested, in light of Terraform's requirement that arguments to a for_each be known at plan time.
I have a fork of the EKS module and I am experimenting with a different way to handle this; the output for the IAM role created for each esg was removed, so we can no longer easily add new policies after the fact.
> the output for the IAM role created for each esg was removed so we can no longer easily add new policies after the fact.
This is available and is one of the recommended ways of getting around this issue currently:
https://github.com/terraform-aws-modules/terraform-aws-eks/blob/d969e94bbea89ed616713bdab849f042ceda87ac/modules/self-managed-node-group/outputs.tf#L111
https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/examples/eks_managed_node_group/main.tf#L352-L362
Awesome, thank you. I saw that worker_iam_role_name was removed; I was looking to see if it was replaced by something.
@bryantbiggs - Regarding the output module.eks.eks_managed_node_groups, do you suggest any workaround, or have plans to limit the output to only the node groups which have create set to true?
@chandanpjain no - the output provided is sufficient I believe
#1988
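(Interpreting that suggestion; the sketch below is not taken from the linked issue, and it assumes the node groups are defined as a map keyed by the same names as var.node_group_configuration. The idea is to drive the for_each from values known at plan time and only read the module output inside the resource body.)

resource "aws_iam_role_policy_attachment" "additional" {
  # Keys come from the tfvars and are known at plan time, so Terraform
  # can predict how many attachments will be created
  for_each = {
    for name, cfg in var.node_group_configuration : name => cfg
    if cfg.create
  }

  policy_arn = aws_iam_policy.s3_write_policy.arn
  # Apply-time values may be referenced in attributes, just not in for_each keys
  role       = module.eks.eks_managed_node_groups[each.key].iam_role_name
}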
@Luwdo did you manage to implement any workaround? I have the same problem and I found no solution yet.
@chandanpjain did you try using https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/examples/eks_managed_node_group/main.tf#L314
iam_role_additional_policies = [
  "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
]
@pen-pal using iam_role_additional_policies with eks_managed_node_groups triggers a prefix length overflow:

│ Error: expected length of name_prefix to be in the range (1 - 38), got iam_role_additional_policies-eks-node-group-
│
│   with module.eks.module.eks_managed_node_group["iam_role_additional_policies"].aws_iam_role.this[0],
│   on .terraform/modules/eks/modules/eks-managed-node-group/main.tf line 435, in resource "aws_iam_role" "this":
│  435:   name_prefix = var.iam_role_use_name_prefix ? "${local.iam_role_name}-" : null
nope, that would be the name used on the IAM role - it does not have anything to do with additional policies
@liqwid You need to declare it inside each node group definition
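A rough sketch of what that placement looks like with the 18.x inputs (the node group name and policy ARN are placeholders; in 18.x the ARN must also be known at plan time, as discussed above):

eks_managed_node_groups = {
  intel_m5 = {
    instance_types = ["m5.xlarge"]

    # Declared inside the node group definition, not as a top-level key
    iam_role_additional_policies = [
      "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
    ]
  }
}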
This issue has been resolved in version 19.0.0 :tada:
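For reference, the v19 release changed iam_role_additional_policies from a list to a map, so the keys are static and known at plan time while the values may be computed ARNs. A sketch of the v19-style usage, reusing the s3_write_policy from this issue:

iam_role_additional_policies = {
  # The map key is a static string, known at plan time;
  # the value may be a computed ARN
  additional = aws_iam_policy.s3_write_policy.arn
}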
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.