terraform-aws-eks
Dependency Cycle Error on example code eks_managed_node_group #2557 should not be locked
Description
The issue described in #2557 has not been explored fully, unless I grossly misunderstand it, in which case I apologize. Creating clusters with `for_each` works very well — I've been doing it for months — but now that I'm trying to use `iam-role-for-service-accounts-eks`, I'm hitting a dependency cycle. I have to add the OIDC provider of the cluster I'm creating to the trust policy of the IAM role I'm creating for an addon (in this case the EBS CSI driver, but I assume it would be the same for any addon).
- [x] I have searched the open/closed issues and my issue is not listed.
Versions
- Module version [Required]: 19.13.1
- Terraform version: Terraform v1.4.6 on darwin_amd64
- Provider version(s):
  - provider registry.terraform.io/carlpett/sops v0.7.2
  - provider registry.terraform.io/gitlabhq/gitlab v3.20.0
  - provider registry.terraform.io/hashicorp/aws v4.66.0
  - provider registry.terraform.io/hashicorp/cloudinit v2.3.2
  - provider registry.terraform.io/hashicorp/kubernetes v2.20.0
  - provider registry.terraform.io/hashicorp/local v2.4.0
  - provider registry.terraform.io/hashicorp/random v3.5.1
  - provider registry.terraform.io/hashicorp/time v0.9.1
  - provider registry.terraform.io/hashicorp/tls v3.4.0
Reproduction Code [Required]
```hcl
module "eks" {
  for_each = toset(var.clusters)

  source  = "terraform-aws-modules/eks/aws"
  version = "19.13.1"

  cluster_name    = "cluster-${var.environment_name}-${each.key}"
  cluster_version = "1.24" # var.k8s_version
  subnet_ids      = module.vpc.private_subnets
  enable_irsa     = true
  vpc_id          = module.vpc.vpc_id

  cluster_endpoint_public_access = true

  tags = {
    Environment = var.environment_name
    Instance    = each.key
  }

  cluster_addons = {
    aws-ebs-csi-driver = {
      service_account_role_arn = module.iam_eks_role_ebs_csi_driver.iam_role_arn
      resolve_conflicts        = "OVERWRITE"
    }
  }
}

[...]

module "iam_eks_role_ebs_csi_driver" {
  source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"

  role_name             = "${var.environment_name}-ebs-csi-driver"
  attach_ebs_csi_policy = true

  oidc_providers = {
    for provider in values(module.eks)[*].oidc_provider_arn : provider => {
      provider_arn               = provider
      namespace_service_accounts = ["kube-system:ebs-csi-controller-sa"]
    }
  }
}
```
Since the `eks` module is instantiated with a `for_each`, I have to make the `oidc_providers` map for the `iam_eks_role_ebs_csi_driver` module dynamic, and this apparently results in a cycle:
```
Error: Cycle: module.environment_dev.module.eks.aws_eks_addon.before_compute, module.environment_dev.module.eks.aws_eks_addon.this, module.environment_dev.module.eks.output.cluster_addons (expand), module.environment_dev.module.iam_eks_role_ebs_csi_driver.var.oidc_providers (expand), module.environment_dev.module.iam_eks_role_ebs_csi_driver.data.aws_iam_policy_document.this, module.environment_dev.module.iam_eks_role_ebs_csi_driver.aws_iam_role.this, module.environment_dev.module.iam_eks_role_ebs_csi_driver.output.iam_role_arn (expand), module.environment_dev.module.eks.var.cluster_addons (expand), module.environment_dev.module.eks.data.aws_eks_addon_version.this, module.environment_dev.module.eks (close)
```
Honestly, I'm not sure why this works without `for_each` either: your IRSA examples cross-reference between the modules (crucially, without `for_each` on any of them), so I guess in that case Terraform is able to resolve the right order of operations.
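One way to break the cycle — a hedged sketch, not something I've applied and tested here — is to remove the `aws-ebs-csi-driver` entry from the module's `cluster_addons` and manage that addon with a standalone `aws_eks_addon` resource from the hashicorp/aws provider. The dependency graph then becomes linear (cluster → IRSA role → addon) instead of circular:

```hcl
# Sketch: manage the EBS CSI addon outside the eks module.
# Assumes aws-ebs-csi-driver has been removed from cluster_addons above,
# so module.eks no longer depends on the IRSA role module.
resource "aws_eks_addon" "ebs_csi" {
  for_each = module.eks # one addon per cluster instance

  cluster_name             = each.value.cluster_name
  addon_name               = "aws-ebs-csi-driver"
  service_account_role_arn = module.iam_eks_role_ebs_csi_driver.iam_role_arn
  resolve_conflicts        = "OVERWRITE"
}
```

The IRSA role module can still build its `oidc_providers` map from `values(module.eks)[*].oidc_provider_arn`, because nothing inside `module.eks` refers back to the role any more.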
As a workaround, I will assign the EBS CSI policy to the node group's IAM role (untested yet, but other issues report that it works).
I confirm that the workaround is working (assigning the EBS CSI policy to the node group role, not to the service account role of the EBS CSI driver).
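For reference, that workaround might look like the following as v19 module inputs — a sketch only; the node-group key `default` is a placeholder, and `iam_role_additional_policies` is a map of policy ARNs in terraform-aws-eks v19:

```hcl
module "eks" {
  # ... existing arguments from the reproduction above ...

  eks_managed_node_groups = {
    default = {
      # Attach the AWS-managed EBS CSI policy to the node group role
      # instead of using IRSA for the driver's service account.
      iam_role_additional_policies = {
        ebs_csi = "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
      }
    }
  }
}
```

Note the trade-off: this grants the EBS permissions to every pod on the node, which is broader than the per-service-account scoping IRSA provides.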
This issue has been automatically marked as stale because it has been open 30 days with no activity. Remove the stale label or comment, or this issue will be closed in 10 days.
This issue was automatically closed because it has been stale for 10 days.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.