terraform-aws-eks-blueprints

Using multiple addon modules to reduce the blast radius.

Open mark-hubers opened this issue 2 years ago • 8 comments

It seems no one has mentioned this idea yet. I tried it, and it seems to be working.

In my first setup I had about six or seven add-ons all in one module, something like this:

module "eks_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.16.0"
   ...
   #K8s Add-ons
   enable_ingress_nginx  = true
   enable_amazon_prometheus  = true
   enable_karpenter = true
   ...

At one point I had to fix a problem with NGINX by uninstalling it, because setting it to false did not fully remove it (I do not remember why). I ended up running terraform destroy -target="module.eks_addons", and I knew it would be hell: all the nodes managed by Karpenter got deleted, and every add-on was gone. The other problem with keeping all add-ons in one module is that it makes it hard to update just a few add-ons at a time.

So I broke up the add-ons module into a few logical sets of add-ons. Something like this:

module "eks_addons_core" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.16.0"
   ...
  enable_amazon_eks_vpc_cni = true
  enable_amazon_eks_coredns = true
  enable_amazon_eks_kube_proxy = true
  enable_amazon_prometheus  = true
   ...
module "eks_addons_karpenter" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.16.0"
   ...
     enable_karpenter = true
     karpenter_helm_config = {
       version    = "0.16.3"
       repository = "https://charts.karpenter.sh/"
     }
   ...

Now I have more options, for example updating just the Karpenter add-on: terraform apply -target="module.eks_addons_karpenter"

I just wanted to share this idea and ask if anyone sees a problem with it, as, so far, it is working nicely for me.

mark-hubers avatar Nov 20 '22 16:11 mark-hubers

We do exactly this: each add-on is its own module. That way each can be upgraded individually if needed.

FernandoMiguel avatar Nov 21 '22 11:11 FernandoMiguel

Actually, we use the individual add-on module directly instead: source = "git::ssh://git@github.com/aws-ia/terraform-aws-eks-blueprints.git//modules/kubernetes-addons/karpenter?ref=v4.16.0". No need to go through the kubernetes-addons wrapper.
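For illustration, a direct call to one of those sub-modules might look roughly like this. This is a minimal sketch: the exact input names (helm_config, addon_context) should be verified against the sub-module's variables for the pinned ref, and local.addon_context is something you have to assemble yourself, as discussed further down in this thread:

module "karpenter" {
  source = "git::ssh://git@github.com/aws-ia/terraform-aws-eks-blueprints.git//modules/kubernetes-addons/karpenter?ref=v4.16.0"

  # Same Helm overrides as with the wrapper, just without the karpenter_ prefix
  helm_config = {
    version    = "0.16.3"
    repository = "https://charts.karpenter.sh/"
  }

  # Shared context object normally built by the kubernetes-addons wrapper
  addon_context = local.addon_context
}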

FernandoMiguel avatar Nov 21 '22 11:11 FernandoMiguel

@FernandoMiguel Thanks for letting me know this idea works for you too, and that you can even skip the kubernetes-addons wrapper. I am going to try that today.

mark-hubers avatar Nov 21 '22 13:11 mark-hubers

I think this is a great idea. @mark-hubers, let me know how it goes; I am going to do the same.

sabinayakc avatar Nov 21 '22 22:11 sabinayakc

We've been using this almost since day one. It's a standard Terraform approach, nothing really new to this particular module.

FernandoMiguel avatar Nov 22 '22 08:11 FernandoMiguel

This is a great idea, thanks for sharing. Does anyone know if this works when add-ons are managed in GitOps mode via enable_argocd = true?

mikeinton avatar Nov 23 '22 16:11 mikeinton

We aren't currently hooking it up to Argo CD, so I can't say.

FernandoMiguel avatar Nov 23 '22 17:11 FernandoMiguel

Interesting idea! This sounds like a good approach for us as well. How do you handle the case where you want to add an add-on that is not already part of the module? Could support be added for a "generic" setting as well? When running the module in "single addon" mode, we could then install an arbitrary Helm chart and still benefit from framework parts like local.addon_context under the hood. I.e. something like this:

module "eks_addons_generic" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.16.0"
   ...
     enable_generic = true
     generic_helm_config = {
       version    = "0.16.3"
       repository = "https://charts.my-personal-charts.com/"
     }
   ...

So essentially, adding the option to include one completely generic add-on? Then we could still benefit from the framework around the add-ons.
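For what it's worth, the repository's helm-addon sub-module (mentioned later in this thread as a predecessor of the new template module) is already close to this generic mode. A rough sketch follows; the chart name and namespace are hypothetical, and the input names should be checked against v4.16.0:

module "eks_addon_generic" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons/helm-addon?ref=v4.16.0"

  helm_config = {
    name       = "my-chart"    # hypothetical chart name
    chart      = "my-chart"
    repository = "https://charts.my-personal-charts.com/"
    version    = "0.16.3"
    namespace  = "my-namespace"    # hypothetical namespace
  }

  # Same shared context the other sub-modules expect
  addon_context = local.addon_context
}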

apamildner avatar Dec 05 '22 15:12 apamildner

Actually, we use the individual add-on module directly instead: source = "git::ssh://git@github.com/aws-ia/terraform-aws-eks-blueprints.git//modules/kubernetes-addons/karpenter?ref=v4.16.0". No need to go through the kubernetes-addons wrapper.

@FernandoMiguel I started down this path and found myself re-implementing local.addon_context in my root module.

Though not challenging in itself, it got me wondering whether this was a good approach in general. May I ask, did you find yourself doing the same or similar?
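For anyone attempting the same, a minimal root-module reconstruction might look something like the sketch below. The attribute names are assumptions based on what the v4.x sub-modules consume and should be checked against the module source for the exact ref you pin:

data "aws_caller_identity" "current" {}
data "aws_partition" "current" {}
data "aws_region" "current" {}

locals {
  # Assumed shape of the context object the kubernetes-addons sub-modules
  # expect; verify the attribute names against the pinned ref.
  addon_context = {
    aws_caller_identity_account_id = data.aws_caller_identity.current.account_id
    aws_caller_identity_arn        = data.aws_caller_identity.current.arn
    aws_partition_id               = data.aws_partition.current.partition
    aws_region_name                = data.aws_region.current.name
    aws_eks_cluster_endpoint       = module.eks.cluster_endpoint
    eks_cluster_id                 = module.eks.cluster_id
    eks_oidc_issuer_url            = replace(module.eks.cluster_oidc_issuer_url, "https://", "")
    eks_oidc_provider_arn          = module.eks.oidc_provider_arn
    irsa_iam_role_path             = "/"
    irsa_iam_permissions_boundary  = ""
    tags                           = {}
  }
}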

Tommyf avatar Feb 04 '23 14:02 Tommyf

Yep

FernandoMiguel avatar Feb 04 '23 14:02 FernandoMiguel

This should be resolved now. We have since released the next "iteration" of EKS Blueprints addons here, which is designed for better stability. In addition, we have created a template module (which the EKS Blueprints addons module utilizes) as a replacement for the prior helm-addon and irsa sub-modules.

Please take a look and let us know if there are any additional questions, comments, or feedback. Thank you!

bryantbiggs avatar Jun 07 '23 00:06 bryantbiggs

@bryantbiggs It's resolved how? Example:

module "eks_blueprints_addons" {
  source = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.0"

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  eks_addons = {
    aws-ebs-csi-driver = {
      most_recent = true
    }
    coredns = {
      most_recent = true
    }
    vpc-cni = {
      most_recent = true
    }
    kube-proxy = {
      most_recent = true
    }
  }

  enable_karpenter                    = true
  enable_external_secrets             = true
  enable_aws_load_balancer_controller = true

  karpenter_node = {
    iam_role_use_name_prefix = false
  }

  aws_load_balancer_controller = {
    set = [
      {
        name  = "vpcId"
        value = module.vpc.vpc_id
      },
      {
        name  = "podDisruptionBudget.maxUnavailable"
        value = 1
      },
    ]
  }

  tags = {
    Environment = "dev"
  }
}

This will explode immediately on apply (and spectacularly on destroy) if external-secrets needs Karpenter to provision nodes for it. If we do it through Fargate it becomes even worse, as it routinely (and randomly) fails with errors like:

no endpoints available for service "external-secrets-webhook"

or the same for the load balancer, because the Fargate profile was not ready and the pod was not provisioned in time (although the dependency on module.eks_blueprints_addons is there). In other words, I need to run terraform apply three times to be absolutely sure everything really applied, lol. I literally see the external-secrets pods not ready in kubectl while resources that depend on module.eks_blueprints_addons happily continue, and continue to fail.
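One partial mitigation, in the spirit of the split-module approach from earlier in this thread (a sketch, not a verified fix): split the addons into separate module calls and chain them with depends_on, so the Karpenter resources exist before the addons that need nodes. Note that depends_on only orders the Terraform graph; it still cannot wait for Karpenter-provisioned nodes to become Ready:

module "addons_karpenter" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.0"

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  enable_karpenter = true
}

module "addons_workloads" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.0"

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  enable_external_secrets             = true
  enable_aws_load_balancer_controller = true

  # Apply only after the Karpenter module has finished; this orders the
  # Terraform graph but does not wait for new nodes to register as Ready.
  depends_on = [module.addons_karpenter]
}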

pkit avatar May 01 '24 16:05 pkit