terraform-aws-eks-blueprints

[QUESTION] Unable to use Nginx ingress on fargate

rohitjha941 opened this issue on Jun 07 '22 · 4 comments

Question

I am trying to run the NGINX ingress controller on Fargate. I added the ingress-nginx namespace to a Fargate profile, but the load balancer does not seem to register any targets.

Additional context

NGINX values file (nginx_values.yaml)

controller:
  service:
    externalTrafficPolicy: "Local"
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
      service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
      service.beta.kubernetes.io/aws-load-balancer-ip-address-type: dualstack
  config:
    proxy-real-ip-cidr: ${cidr}
  image:
    allowPrivilegeEscalation: false
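
A note on the annotations above: service.beta.kubernetes.io/aws-load-balancer-type: nlb is handled by the legacy in-tree controller, which provisions the NLB with instance targets, while aws-load-balancer-nlb-target-type is only honored by the AWS Load Balancer Controller. Fargate pods have no EC2 instances behind them, so an instance-target NLB never registers anything, which matches the symptom described. Below is a minimal sketch of how the values could look if the AWS Load Balancer Controller (enabled in the addons module further down, assuming v2.2 or newer) manages the Service and registers pod IPs directly:

# sketch: hand the Service to the AWS Load Balancer Controller with IP targets (required on Fargate)
controller:
  service:
    externalTrafficPolicy: "Local"
    annotations:
      # "external" makes the AWS Load Balancer Controller reconcile this Service;
      # "nlb" would leave it to the in-tree provider, which uses instance targets
      service.beta.kubernetes.io/aws-load-balancer-type: "external"
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
      service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
      # keep proxy protocol only if nginx is also told to parse it (see config below)
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
  config:
    proxy-real-ip-cidr: ${cidr}
    # required whenever the proxy-protocol annotation above is present
    use-proxy-protocol: "true"

Once the controller manages the Service, target registration can be checked through the TargetGroupBinding resources it creates in the ingress-nginx namespace.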

More

  - [x] Yes, I have checked the repo for existing issues before raising this question

rohitjha941 · Jun 07 '22 15:06

Hi @rohitjha941 - would you mind sharing your Terraform configuration so I can investigate and reproduce?

bryantbiggs · Jun 07 '22 15:06

Addons Module

module "eks_blueprints_kubernetes_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons"

  eks_cluster_id       = module.eks.eks.cluster_id
  eks_cluster_endpoint = module.eks.eks.cluster_endpoint
  eks_oidc_provider    = module.eks.eks.oidc_provider
  eks_cluster_version  = module.eks.eks.cluster_version

  enable_amazon_eks_aws_ebs_csi_driver = true

  # Add-ons
  enable_metrics_server = true
  enable_ingress_nginx  = true
  enable_karpenter      = true

  enable_aws_load_balancer_controller = true
  enable_aws_node_termination_handler = true

  ingress_nginx_helm_config = {
    version = "4.0.17"
    values = [templatefile("${path.module}/nginx_values.yaml", {
      cidr = module.vpc.vpc_cidr_block
    })]
  }

  aws_load_balancer_controller_helm_config = {
    values = [templatefile("${path.module}/alb_values.yaml", {
      eks_cluster_id = module.eks.eks.cluster_id
      aws_region     = data.aws_region.current.name
      repository     = "602401143452.dkr.ecr.us-east-2.amazonaws.com/amazon/aws-load-balancer-controller"
      vpcId          = module.vpc.vpc_id
    })]
  }

  depends_on = [
    module.eks
  ]
}
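
The alb_values.yaml file referenced above is not included in the issue, so its exact contents are unknown. Judging only from the variables passed to templatefile, it presumably maps them onto the standard aws-load-balancer-controller chart values roughly as follows (a hypothetical reconstruction, not the actual file):

# hypothetical reconstruction of alb_values.yaml based on the templatefile variables above
clusterName: ${eks_cluster_id}
region: ${aws_region}
vpcId: ${vpcId}
image:
  # regional ECR repository passed in from Terraform
  repository: ${repository}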

Cluster

module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = var.cluster_name
  cluster_version = var.cluster_version
  vpc_id          = var.vpc.vpc_id
  subnet_ids      = concat(var.vpc.private_subnets, var.vpc.public_subnets)
  aws_auth_roles  = var.map_roles
  aws_auth_users  = var.map_users

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = true
  enable_irsa                     = true
  create_cni_ipv6_iam_policy      = false
  manage_aws_auth_configmap       = true

  create_cluster_security_group = var.create_cluster_security_group
  create_node_security_group    = false
  node_security_group_id        = module.node_security_group.security_group_id

  tags = merge(var.tags, {
    "karpenter.sh/discovery" = var.cluster_name
  })

  eks_managed_node_group_defaults = {
    ami_type                   = "AL2_x86_64"
    disk_size                  = 50
    instance_types             = ["t3.medium"]
    iam_role_attach_cni_policy = true
    capacity_type              = "SPOT"
    create_launch_template     = false
    launch_template_name       = ""
    iam_role_additional_policies = [
      "arn:${data.aws_partition.current.partition}:iam::aws:policy/AmazonSSMManagedInstanceCore"
    ]
  }

  eks_managed_node_groups = {
    default_node_group = {
      min_size     = 1
      max_size     = 1
      desired_size = 1
    }
  }

  cluster_addons = {
    coredns = {
      resolve_conflicts = "OVERWRITE"
    }
    kube-proxy = {}
    vpc-cni = {
      resolve_conflicts = "OVERWRITE"
    }
  }

  fargate_profiles = {
    coredns = {
      name       = "coredns"
      subnet_ids = var.vpc.private_subnets
      selectors = [
        {
          namespace = "kube-system"
          labels = {
            k8s-app = "kube-dns"
          }
        }
      ]
    }
  }

  aws_auth_fargate_profile_pod_execution_role_arns = [
    for k in module.fargate_profile : k.fargate_profile_pod_execution_role_arn
  ]
}

Fargate profile per namespace

module "fargate_profile" {
  source   = "terraform-aws-modules/eks/aws//modules/fargate-profile"
  for_each = toset(var.fargate_namespaces)

  name            = each.key
  cluster_name    = module.eks.cluster_id
  subnet_ids      = var.vpc.private_subnets
  iam_role_arn    = aws_iam_role.fargate.arn
  create_iam_role = false

  selectors = [{
    namespace = each.key
  }]

  tags = merge(var.tags, { Separate = "fargate-profile" })
}

Namespaces used for Fargate

  fargate_namespaces = [
    "karpenter",
    "kube-system",
    "default",
    "vault",
    "ingress-nginx",
  ]

rohitjha941 · Jun 07 '22 20:06

This issue has been automatically marked as stale because it has been open 30 days with no activity. Remove the stale label or comment, or this issue will be closed in 10 days.

github-actions[bot] · Jul 18 '22 00:07

Hi @rohitjha941, please see the following workaround to resolve the issue: https://github.com/kubernetes/ingress-nginx/issues/4888#issuecomment-916846889

NoamGoren · Aug 01 '22 07:08

This issue has been automatically marked as stale because it has been open 30 days with no activity. Remove the stale label or comment, or this issue will be closed in 10 days.

github-actions[bot] · Sep 04 '22 00:09

Issue closed due to inactivity.

github-actions[bot] · Sep 14 '22 00:09