pulumi-eks
nodeGroupOptions kubeletExtraArgs and nodeUserData not taken into account in ManagedNodeGroup
What happened?
I create an EKS cluster with no default node group.
Then I create a ManagedNodeGroup in this cluster.
I need to pass kubeletExtraArgs to configure the kubelet on the managed node group's nodes.
When this parameter (or nodeUserData) is set in nodeGroupOptions, it is not reflected in the managed node group.
I also don't see a way to pass it when creating the ManagedNodeGroup resource.
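For illustration, the managed node group is created roughly along these lines (a simplified sketch using the same @pulumi/eks import as above; nodeRole, the subnet IDs and the scaling values are placeholders from my stack):

const managedNodeGroup = new eks.ManagedNodeGroup('managed-ng', {
  cluster: cluster,                // the eks.Cluster created with skipDefaultNodeGroup: true
  nodeRole: nodeRole,              // IAM role the worker nodes run with (placeholder)
  subnetIds: vpc.privateSubnetIds,
  scalingConfig: {
    minSize: 1,
    desiredSize: 2,
    maxSize: 3,
  },
});

As noted above, I don't see a kubeletExtraArgs or nodeUserData input on the ManagedNodeGroup resource itself, which is why I expected the values from the cluster's nodeGroupOptions to be picked up.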
Expected Behavior
The parameter to be taken into account.
Steps to reproduce
As described under "What happened?". Here is an example of the code:
const cluster = new eks.Cluster(
  clusterName,
  {
    name: clusterName,
    version: '1.23',
    vpcId: vpc.id,
    publicSubnetIds: vpc.publicSubnetIds,
    privateSubnetIds: vpc.privateSubnetIds,
    nodeAssociatePublicIpAddress: false,
    roleMappings: userRoleMappings,
    instanceRoles: [nodeRole], // instanceRole => instanceProfile
    nodeGroupOptions: {
      instanceProfile,
      nodeUserData: '#!/bin/bash\nsed -i "s/172.20.0.10/169.254.20.10/g" ./config.json',
    },
    skipDefaultNodeGroup: true,
    createOidcProvider: true,
    // ...
  },
);
Same for the kubeletExtraArgs argument.
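For example, setting kubeletExtraArgs in the same place (the flag value below is only illustrative) is also not applied to the managed nodes:

nodeGroupOptions: {
  instanceProfile,
  kubeletExtraArgs: '--max-pods=110',
},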
Deploy with pulumi up and check on the nodes whether the setting has been applied; it has not.
Output of pulumi about
CLI
Version 3.57.1
Go Version go1.20.2
Go Compiler gc
Plugins
NAME        VERSION
auth0       2.14.0
aws         5.24.0
aws         5.16.2
awsx        1.0.1
datadog     4.12.0
docker      3.6.1
eks         1.0.1
kubernetes  3.23.1
nodejs      unknown
postgresql  3.6.0
Host
OS darwin
Version 13.2.1
Arch arm64
Additional context
No response
Contributing
Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
That certainly seems like a bug from the description -- thank you for bringing this to our attention and for the instructions for reproducing the problem. I'll put it on our board so we can take a closer look.
This definitely is a bug. It does not work with pulumi_eks.ManagedNodeGroup.
I have set the kubelet extra args, but nothing happens:
const cluster = new eks.Cluster(`${common.name_prefix}-cluster`, {
  name: `${common.name_prefix}-cluster`,
  vpcId: common.aws_vpc.id,
  skipDefaultNodeGroup: true,
  endpointPrivateAccess: true,
  endpointPublicAccess: false,
  privateSubnetIds: common.privateSubnetIDs,
  createOidcProvider: true,
  kubernetesServiceIpAddressRange: "172.20.0.0/16",
  serviceRole: iam.eks_service_role,
  version: common.eks_version,
  clusterSecurityGroup: securitygroup.cluster_securitygroup,
  nodeGroupOptions: {
    nodeSecurityGroup: securitygroup.node_securitygroup,
    instanceProfile: iam.eks_node_instance_profile,
    nodeSubnetIds: common.privateSubnetIDs,
    extraNodeSecurityGroups: [securitygroup.node_securitygroup],
    nodeAssociatePublicIpAddress: false,
    autoScalingGroupTags: {
      [`k8s.io/cluster-autoscaler/${common.name_prefix}-cluster`]: "true",
      "Name": `${common.name_prefix}-workernodes`,
      "k8s.io/cluster-autoscaler/enabled": "true",
    },
    bootstrapExtraArgs: "--use-max-pods false --kubelet-extra-args '--max-pods=110'",
  },
  // instanceProfileName: iam.eks_node_instance_profile.name,
  enabledClusterLogTypes: [
    "api",
    "audit",
    "authenticator",
    "controllerManager",
    "scheduler",
  ],
  instanceRoles: [
    iam.eks_instance_role,
  ],
  useDefaultVpcCni: true,
  vpcCniOptions: {
    cniCustomNetworkCfg: true,
  },
  tags: { Name: `${common.project}-${common.stack}-cluster` },
});
@defyjoy just curious here, does setting extraNodeSecurityGroups: [securitygroup.node_securitygroup] work for you?
For me, the nodes do not have the extra SG attached. More about this here: https://github.com/pulumi/pulumi-eks/issues/841#issuecomment-1486472888