
Failed to provision cluster with only private endpoint enabled

RaviVadera opened this issue 10 months ago · 6 comments

What happened?

When creating an eks.Cluster with only the private endpoint enabled, the aws-auth ConfigMap resource fails to be created, which causes the deployment to fail.

Pulumi fails with:

@ updating......
 +  kubernetes:core/v1:ConfigMap xxx-cluster-nodeAccess creating (2s) error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: Get "https://xxx.xxx.eks.amazonaws.com/openapi/v2?timeout=32s": Service Unavailable
 +  kubernetes:core/v1:ConfigMap dev-monitor-cluster-nodeAccess **creating failed** error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: Get "https://xxx.xxx.eks.amazonaws.com/openapi/v2?timeout=32s": Service Unavailable

Expected to happen: Pulumi should not create the ConfigMap resource when only the private API endpoint is enabled.

Example

Create a cluster with the following params:

endpointPublicAccess: false,
endpointPrivateAccess: true,
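
For context, a minimal TypeScript sketch of such a cluster configuration (the resource name, VPC, and subnet IDs are placeholders, not values from the original report):

```typescript
import * as eks from "@pulumi/eks";

// Placeholder network values; only the two endpoint settings matter for this issue.
const cluster = new eks.Cluster("example-cluster", {
    vpcId: "vpc-xxxxxxxx",
    privateSubnetIds: ["subnet-aaaaaaaa", "subnet-bbbbbbbb"],
    endpointPublicAccess: false,  // no public API endpoint
    endpointPrivateAccess: true,  // private endpoint only
});
```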

Output of pulumi about

CLI
Version      3.113.3
Go Version   go1.22.2
Go Compiler  gc

Plugins
KIND      NAME        VERSION
resource  aws         6.31.0
resource  awsx        2.7.0
resource  docker      4.5.3
resource  docker      3.6.1
resource  eks         2.3.0
resource  kubernetes  4.10.0
language  nodejs      unknown
resource  random      4.16.1

Host
OS       ubuntu
Version  22.04
Arch     x86_64

This project is written in nodejs: executable='/usr/bin/node' version='v18.20.0'

Current Stack: organization/xxx/xxx

xxxxxxxx

Found no pending operations associated with dev

Backend
Name           xxxxx
URL            s3://xxxx
User           xxxx
Organizations
Token type     personal

Dependencies:
NAME                              VERSION
@typescript-eslint/eslint-plugin  7.7.0
eslint                            8.57.0
@pulumi/eks                       2.3.0
@pulumi/pulumi                    3.113.0
@pulumi/kubernetes                4.10.0
proxy-agent                       6.4.0
@types/node                       20.12.3
@typescript-eslint/parser         7.7.0
eslint-config-prettier            9.1.0
eslint-plugin-prettier            5.1.3
prettier                          3.2.5
@pulumi/awsx                      2.7.0
@pulumi/random                    4.16.1
@pulumi/aws                       6.31.0
yaml                              2.4.1

Pulumi locates its logs in /tmp by default

Additional context

No response

Contributing

Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

RaviVadera avatar Apr 25 '24 07:04 RaviVadera

@rquitales Were you able to repro the issue? What is your feedback on it?

mikhailshilkov avatar May 10 '24 10:05 mikhailshilkov

This issue was created alongside #1133 and relates to accessing the cluster's API server to perform on-cluster actions. As the cluster does not have a public endpoint, our provider is unable to perform these actions. This could potentially be resolved by #1027, as that would defer the auth setup to be handled by AWS.

rquitales avatar May 10 '24 19:05 rquitales

Thank you. What is this issue (1134) tracking then?

mikhailshilkov avatar May 13 '24 08:05 mikhailshilkov

#1133 tracks disabling the health checking. This issue (#1134) tracks the aws-auth-related ConfigMap updates required on the cluster. As the cluster is private, we currently can't update the ConfigMap.

rquitales avatar May 13 '24 18:05 rquitales

I just wanted to point to this issue as well: https://github.com/pulumi/pulumi-eks/issues/1191. It would be great if they could all be fixed together. I think just respecting the proxy config should do the trick for the time being.

miadabrin avatar Jun 12 '24 14:06 miadabrin

> I just wanted to point to this issue as well: #1191. It would be great if they could all be fixed together. I think just respecting the proxy config should do the trick for the time being.

It depends on how you plan to provision the cluster once it is created. Respecting the proxy config does not fully fix the issue, since pulumi up will still fail with an unreachable API if you do not provide any proxy URL.

In theory, I would expect:

  • if only the private API endpoint is enabled, the eks.Cluster resource should fall back to the basic behaviour of aws.eks.Cluster.
  • a private API endpoint plus tunneling through a bastion to provision the cluster should allow all resources supported by eks.Cluster.

RaviVadera avatar Jul 08 '24 14:07 RaviVadera

Hi,

Is there an update on this? I'm using a bastion as a proxy to provision the cluster in Go.

Sindvero avatar Sep 24 '24 21:09 Sindvero

@Sindvero, you could start using AccessEntries with the authenticationMode set to API. This will not require direct cluster connectivity.
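
For illustration, a minimal TypeScript sketch of that suggestion, assuming a pulumi-eks version with access entry support (the resource name is a placeholder and other cluster options are omitted):

```typescript
import * as eks from "@pulumi/eks";

const cluster = new eks.Cluster("example-cluster", {
    endpointPublicAccess: false,
    endpointPrivateAccess: true,
    // With the API authentication mode, cluster access is managed through
    // EKS Access Entries, so the provider does not need to reach the cluster
    // to write the aws-auth ConfigMap.
    authenticationMode: eks.AuthenticationMode.Api,
    // IAM principals can then be granted access via the accessEntries option
    // instead of aws-auth ConfigMap entries.
});
```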

We're actively working to reduce the reliance on cluster connectivity. We're addressing https://github.com/pulumi/pulumi-eks/issues/1191 in EKS v3 and will switch the VPC CNI to use EKS addons instead.

The underlying problem of this issue can now be worked around with AccessEntries. Please have a look at this guide for migration instructions: https://github.com/pulumi/pulumi-eks/blob/master/docs/authentication-mode-migration.md. I'm closing this issue in the meantime, but please do not hesitate to open a new issue in case you run into more problems!

flostadler avatar Sep 25 '24 13:09 flostadler