
[Feature] Support Private-Endpoint Cluster Creation when VPC already exists


Part of the discussion in #5503 revolves around eksctl needing to reach the cluster API endpoint in order to join nodes to the cluster:

  1. EKS does allow creating a cluster with only private endpoint access enabled, but eksctl doesn't support this during cluster creation, as it prevents eksctl from joining the worker nodes to the cluster.

It is explained that network connectivity to the VPC is required if public endpoint access is disabled:

eksctl must be run from within the same VPC (or reach it via some other means, like AWS Direct Connect) if public endpoint access is disabled; otherwise it cannot connect to the API server and eventually fails with a timeout error.
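
For context, the endpoint access settings in question are expressed in the ClusterConfig via vpc.clusterEndpoints. A minimal sketch of the private-only combination that triggers this restriction at creation time:

vpc:
  clusterEndpoints:
    publicAccess: false   # no public API endpoint
    privateAccess: true   # API server reachable only from within the VPC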

However, in cases where the VPC already exists, it is highly likely that network access to that VPC is also already in place. In our case, we have direct VPC access via Transit Gateway from our company's internal network: DNS resolution of the API endpoint returns the endpoint's VPC IP address, which is directly routable by eksctl when run from our company laptops. In this circumstance it would be easier to create the cluster with PrivateNetworking enabled at creation time.

The negative effects of this could be limited by only allowing it when eksctl is not creating the VPC itself and all the VPC parameters are specified in the YAML:

privateCluster:
  enabled: true
  skipEndpointCreation: true

vpc:
  id: vpc-0000000000000000
  subnets:
    private:
      us-east-1a:
        id: subnet-0000000000000000
      us-east-1b:
        id: subnet-0000000000000000    
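
For illustration, a complete config along these lines might look like the following. This is only a sketch: the cluster name, region, availability zones, and resource IDs are placeholders, and the nodegroup section is included just to show that nodes would use the same pre-existing private subnets.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: private-cluster        # placeholder name
  region: us-east-1

privateCluster:
  enabled: true
  skipEndpointCreation: true   # VPC endpoints already exist in the pre-created VPC

vpc:
  id: vpc-0000000000000000
  subnets:
    private:
      us-east-1a:
        id: subnet-0000000000000000
      us-east-1b:
        id: subnet-0000000000000000

managedNodeGroups:
  - name: ng-1
    privateNetworking: true    # nodes get only private IPs in the subnets above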

cthrasher · Aug 25 '22 21:08