
`eks.Cluster` gets created with `bootstrapClusterCreatorAdminPermissions` set to false despite showing true in CloudTrail

Open zbuchheit opened this issue 1 year ago • 4 comments

Describe what happened

When creating an `eks.Cluster` without `accessConfig` set, CloudTrail displays:

            "accessConfig": {
                "bootstrapClusterCreatorAdminPermissions": true,
                "authenticationMode": "CONFIG_MAP"
            }

However, when I look in the Pulumi state, I see the following:

                    "accessConfig": {
                        "authenticationMode": "CONFIG_MAP",
                        "bootstrapClusterCreatorAdminPermissions": false
                    },

A refresh does not change the value of `bootstrapClusterCreatorAdminPermissions` to match CloudTrail either, and changing the value to `true` in my Pulumi program triggers a replacement.
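To compare the server-side value against the provider state directly, the cluster's access configuration can be read back with the AWS CLI. The cluster name below is a placeholder; substitute the physical name of the cluster from the stack:

```shell
# Read the access configuration EKS itself reports for the cluster.
# "zbuchheit-cluster-abc123" is a placeholder physical cluster name.
aws eks describe-cluster \
  --name zbuchheit-cluster-abc123 \
  --query 'cluster.accessConfig'
```

If this reports `bootstrapClusterCreatorAdminPermissions: true` while the Pulumi state shows `false`, the mismatch is in the provider's handling of the default rather than in what EKS actually created.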

Sample program

```go
package main

import (
	"fmt"

	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/eks"
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/iam"
	awsx "github.com/pulumi/pulumi-awsx/sdk/v2/go/awsx/ec2"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {

		numAZs := 2
		vpc, err := awsx.NewVpc(ctx, "my-vpc", &awsx.VpcArgs{
			SubnetSpecs: []awsx.SubnetSpecArgs{
				{
					Type: awsx.SubnetTypePublic,
				},
			},
			NatGateways: &awsx.NatGatewayConfigurationArgs{
				Strategy: awsx.NatGatewayStrategyNone,
			},
			NumberOfAvailabilityZones: &numAZs,
		})
		if err != nil {
			return err
		}

		eksRole, err := iam.NewRole(ctx, "eksRole", &iam.RoleArgs{
			AssumeRolePolicy: pulumi.String(`{
					"Version": "2012-10-17",
					"Statement": [{
						"Sid": "",
						"Effect": "Allow",
						"Principal": {
							"Service": "eks.amazonaws.com"
						},
						"Action": "sts:AssumeRole"
					}]
				}`),
		})
		if err != nil {
			return err
		}

		eksPolicies := []string{
			"arn:aws:iam::aws:policy/AmazonEKSServicePolicy",
			"arn:aws:iam::aws:policy/AmazonEKSClusterPolicy",
		}
		for i, eksPolicy := range eksPolicies {
			_, err := iam.NewRolePolicyAttachment(ctx, fmt.Sprintf("rpa-%d", i), &iam.RolePolicyAttachmentArgs{
				PolicyArn: pulumi.String(eksPolicy),
				Role:      eksRole.Name,
			})
			if err != nil {
				return err
			}
		}

		cluster, err := eks.NewCluster(ctx, "zbuchheit-cluster", &eks.ClusterArgs{
			RoleArn: eksRole.Arn,
			Version: pulumi.String("1.29"),
			VpcConfig: &eks.ClusterVpcConfigArgs{
				SubnetIds: vpc.PublicSubnetIds,
			},
			// AccessConfig: &eks.ClusterAccessConfigArgs{
			// 	AuthenticationMode:                      pulumi.String("CONFIG_MAP"),
			// 	BootstrapClusterCreatorAdminPermissions: pulumi.Bool(true),
			// },
		})
		if err != nil {
			return err
		}

		ctx.Export("clusterID", cluster.ID())
		return nil
	})
}
```

Log output

N/A

Affected Resource(s)

AWS EKS Cluster

Output of pulumi about

CLI          
Version      3.120.0
Go Version   go1.22.4
Go Compiler  gc

Plugins
KIND      NAME    VERSION
resource  aws     6.37.1
resource  awsx    2.11.0
language  go      unknown

Host     
OS       darwin
Version  14.2.1
Arch     arm64

This project is written in go: executable='/opt/homebrew/bin/go' version='go version go1.22.3 darwin/arm64'

Current Stack: zbuchheit-pulumi-corp/aws-go-eks-cluster/dev

TYPE                                                 URN
pulumi:pulumi:Stack                                  urn:pulumi:dev::aws-go-eks-cluster::pulumi:pulumi:Stack::aws-go-eks-cluster-dev
pulumi:providers:aws                                 urn:pulumi:dev::aws-go-eks-cluster::pulumi:providers:aws::default_6_37_1
pulumi:providers:awsx                                urn:pulumi:dev::aws-go-eks-cluster::pulumi:providers:awsx::default_2_11_0
awsx:ec2:Vpc                                         urn:pulumi:dev::aws-go-eks-cluster::awsx:ec2:Vpc::my-vpc
aws:iam/role:Role                                    urn:pulumi:dev::aws-go-eks-cluster::aws:iam/role:Role::eksRole
aws:iam/rolePolicyAttachment:RolePolicyAttachment    urn:pulumi:dev::aws-go-eks-cluster::aws:iam/rolePolicyAttachment:RolePolicyAttachment::rpa-0
aws:iam/rolePolicyAttachment:RolePolicyAttachment    urn:pulumi:dev::aws-go-eks-cluster::aws:iam/rolePolicyAttachment:RolePolicyAttachment::rpa-1
aws:ec2/vpc:Vpc                                      urn:pulumi:dev::aws-go-eks-cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc::my-vpc
aws:ec2/internetGateway:InternetGateway              urn:pulumi:dev::aws-go-eks-cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/internetGateway:InternetGateway::my-vpc
aws:ec2/subnet:Subnet                                urn:pulumi:dev::aws-go-eks-cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet::my-vpc-public-2
aws:ec2/subnet:Subnet                                urn:pulumi:dev::aws-go-eks-cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet::my-vpc-public-1
aws:ec2/routeTable:RouteTable                        urn:pulumi:dev::aws-go-eks-cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable::my-vpc-public-2
aws:ec2/routeTable:RouteTable                        urn:pulumi:dev::aws-go-eks-cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable::my-vpc-public-1
aws:ec2/routeTableAssociation:RouteTableAssociation  urn:pulumi:dev::aws-go-eks-cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable$aws:ec2/routeTableAssociation:RouteTableAssociation::my-vpc-public-2
aws:ec2/routeTableAssociation:RouteTableAssociation  urn:pulumi:dev::aws-go-eks-cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable$aws:ec2/routeTableAssociation:RouteTableAssociation::my-vpc-public-1
aws:ec2/route:Route                                  urn:pulumi:dev::aws-go-eks-cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable$aws:ec2/route:Route::my-vpc-public-2
aws:ec2/route:Route                                  urn:pulumi:dev::aws-go-eks-cluster::awsx:ec2:Vpc$aws:ec2/vpc:Vpc$aws:ec2/subnet:Subnet$aws:ec2/routeTable:RouteTable$aws:ec2/route:Route::my-vpc-public-1
pulumi:providers:pulumi                              urn:pulumi:dev::aws-go-eks-cluster::pulumi:providers:pulumi::default
aws:eks/cluster:Cluster                              urn:pulumi:dev::aws-go-eks-cluster::aws:eks/cluster:Cluster::zbuchheit-cluster


Found no pending operations associated with dev

Backend        
Name           pulumi.com
URL            https://app.pulumi.com/zbuchheit-pulumi-corp
User           zbuchheit-pulumi-corp
Organizations  zbuchheit-pulumi-corp, team-ce, demo, pulumi
Token type     personal

Dependencies:
NAME                                  VERSION
github.com/pulumi/pulumi-aws/sdk/v6   v6.37.1
github.com/pulumi/pulumi-awsx/sdk/v2  v2.11.0
github.com/pulumi/pulumi/sdk/v3       v3.119.0

Additional context

I suspect this could be related to https://github.com/pulumi/pulumi-aws/issues/3997, and possibly an upstream issue.

Contributing

Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

zbuchheit avatar Jun 21 '24 22:06 zbuchheit

This also appears to be an upstream issue; I'm seeing the same behavior with the upstream provider.

zbuchheit avatar Jun 21 '24 23:06 zbuchheit

I believe this is covered by the behavior described in this upstream issue.

zbuchheit avatar Jun 22 '24 00:06 zbuchheit

@zbuchheit it sounds like the workaround for this issue is simply to leave `bootstrapClusterCreatorAdminPermissions` unset. Is that a viable workaround, or is there a reason it needs to be set explicitly to `true` after creation?

corymhall avatar Jun 24 '24 12:06 corymhall

This is indeed caused by https://github.com/pulumi/pulumi-aws/issues/3997.

@corymhall the problem is that users create their cluster expecting the current IAM principal to get admin access (because the docs say so), but in reality that doesn't happen.

If the cluster is in `CONFIG_MAP` authentication mode, those users have effectively locked themselves out of the cluster. One way to work around this is to change the authentication mode to `API_AND_CONFIG_MAP` or `API` and create the necessary access entries to grant your IAM principal access to the cluster: https://www.pulumi.com/registry/packages/aws/api-docs/eks/accessentry/
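As a rough sketch of that workaround: once the cluster's `authenticationMode` has been switched to `API_AND_CONFIG_MAP` or `API`, an access entry plus a policy association can re-admit a locked-out principal. The cluster name and principal ARN below are placeholders for illustration, not values from the original report:

```go
package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/eks"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Placeholder cluster name and IAM principal; substitute the
		// physical cluster name and the principal that was locked out.
		entry, err := eks.NewAccessEntry(ctx, "admin-access", &eks.AccessEntryArgs{
			ClusterName:  pulumi.String("zbuchheit-cluster"),
			PrincipalArn: pulumi.String("arn:aws:iam::111122223333:role/cluster-admin"),
		})
		if err != nil {
			return err
		}

		// Associate the AWS-managed cluster-admin access policy with
		// that principal, scoped to the whole cluster.
		_, err = eks.NewAccessPolicyAssociation(ctx, "admin-policy", &eks.AccessPolicyAssociationArgs{
			ClusterName:  entry.ClusterName,
			PrincipalArn: entry.PrincipalArn,
			PolicyArn:    pulumi.String("arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"),
			AccessScope: &eks.AccessPolicyAssociationAccessScopeArgs{
				Type: pulumi.String("cluster"),
			},
		})
		return err
	})
}
```

Switching `CONFIG_MAP` to `API_AND_CONFIG_MAP` is an in-place update on the cluster, so this path avoids the replacement that toggling `bootstrapClusterCreatorAdminPermissions` would trigger.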

flostadler avatar Jun 24 '24 16:06 flostadler

This issue has been addressed in PR #4217 and shipped in release v6.45.0.

pulumi-bot avatar Jul 16 '24 20:07 pulumi-bot