[EKS controller]: can't create a node group
Hello!
I tried to apply the following manifest:
```yaml
apiVersion: eks.services.k8s.aws/v1alpha1
kind: Nodegroup
metadata:
  name: production-fix-ng-20241010-b
  namespace: infra-production
spec:
  name: production-fix-ng-20241010-b
  clusterName: production
  diskSize: 100
  subnetRefs:
    - from:
        name: production-private-eu-west-2b
  nodeRole: arn:aws:iam::****:role/*****
  scalingConfig:
    minSize: 1
    maxSize: 1
    desiredSize: 1
  instanceTypes:
    - m5.large
  taints:
    - key: node-role.kubernetes.io/fix
      value: ""
      effect: "NO_SCHEDULE"
  labels:
    node-role.kubernetes.io/fix: ""
  amiType: BOTTLEROCKET_x86_64
```
I am getting the following error in the resource status:
```yaml
status:
  ackResourceMetadata:
    ownerAccountID: '******'
    region: eu-west-2
  conditions:
    - lastTransitionTime: '2024-11-10T11:59:37Z'
      status: 'True'
      type: ACK.ReferencesResolved
    - message: |-
        InvalidParameterException: Invalid value: : field must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]')
        {
          RespMetadata: {
            StatusCode: 400,
            RequestID: "c371ba24-6718-412a-96c2-3073d7041b3d"
          },
          ClusterName: "production",
          Message_: "Invalid value: : field must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]')",
          NodegroupName: "production-fix-ng-20241010-b"
        }
      status: 'True'
      type: ACK.Terminal
    - lastTransitionTime: '2024-11-10T11:59:38Z'
      message: Resource not synced
      reason: resource is in terminal condition
      status: 'False'
      type: ACK.ResourceSynced
```
This error is very ambiguous: it does not say which field is invalid. Removing the label fixed it, which is strange, because the documentation mentions no such restriction:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-nodegroup.html#cfn-eks-nodegroup-labels
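For what it's worth, the empty string `""` does not match the validation regex quoted in the error (`([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]` requires at least one trailing alphanumeric character), so my guess is that the EKS API rejects empty label values even though Kubernetes itself allows them. A sketch of a workaround (the `"true"` value is just an illustrative choice, not from any docs) that avoided the error for me:

```yaml
# Assumption: the EKS CreateNodegroup API rejects empty label values,
# so give the label a non-empty value that matches the regex from the
# error message instead of "".
labels:
  node-role.kubernetes.io/fix: "true"
```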
Issues go stale after 180d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 60d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
/remove-lifecycle stale