cortex
Node group ARN is getting truncated
Description
When creating a cluster with node group names longer than a few characters, the node group names embedded in the ARNs of the corresponding node group IAM roles get truncated. Is this a cause for concern?
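For context, the truncation is consistent with IAM's hard 64-character limit on role names. The sketch below is a hypothetical model of that behavior (not eksctl's actual code): it assumes the generated name is composed of a fixed prefix, the node group name, and a suffix, with the node group name trimmed to fit the 64-character budget. The function name and composition logic are assumptions for illustration.

```python
# IAM role names are capped at 64 characters (documented IAM limit).
IAM_ROLE_NAME_MAX = 64

def generated_role_name(cluster: str, nodegroup: str, suffix: str) -> str:
    """Hypothetical sketch of how a role name like the ones in the log
    could be produced: prefix + nodegroup + tail, with the nodegroup
    segment truncated so the total fits within the IAM limit."""
    prefix = f"eksctl-{cluster}-nodegroup-"
    tail = f"-NodeInstanceRole-{suffix}"
    budget = IAM_ROLE_NAME_MAX - len(prefix) - len(tail)
    return prefix + nodegroup[:budget] + tail

# Reproduces the truncated name seen in the log below:
name = generated_role_name("test-22", "cx-operator", "193O7FLCFOZJ8")
print(name)       # eksctl-test-22-nodegroup-cx-opera-NodeInstanceRole-193O7FLCFOZJ8
print(len(name))  # 64
```

Under this model, "cx-operator" is cut to "cx-opera" because only 8 characters remain once the prefix and suffix are accounted for, which matches the ARNs in the example output.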
Example
2021-03-15 22:51:23 [✔] all EKS cluster resources for "test-22" have been created
2021-03-15 22:51:23 [ℹ] adding identity "arn:aws:iam::499593605069:role/eksctl-test-22-nodegroup-cx-opera-NodeInstanceRole-193O7FLCFOZJ8" to auth ConfigMap
2021-03-15 22:51:23 [ℹ] nodegroup "cx-operator" has 0 node(s)
2021-03-15 22:51:23 [ℹ] waiting for at least 2 node(s) to become ready in "cx-operator"
2021-03-15 22:52:07 [ℹ] nodegroup "cx-operator" has 2 node(s)
2021-03-15 22:52:07 [ℹ] node "ip-192-168-19-50.ec2.internal" is ready
2021-03-15 22:52:07 [ℹ] node "ip-192-168-39-127.ec2.internal" is ready
2021-03-15 22:52:07 [ℹ] adding identity "arn:aws:iam::499593605069:role/eksctl-test-22-nodegroup-cx-ws-cp-NodeInstanceRole-1PWJXEG5X2YOA" to auth ConfigMap
2021-03-15 22:52:07 [ℹ] nodegroup "cx-ws-cpu-spot" has 0 node(s)
2021-03-15 22:52:07 [ℹ] waiting for at least 1 node(s) to become ready in "cx-ws-cpu-spot"
2021-03-15 22:52:55 [ℹ] nodegroup "cx-ws-cpu-spot" has 1 node(s)
2021-03-15 22:52:55 [ℹ] node "ip-192-168-54-170.ec2.internal" is ready
2021-03-15 22:52:55 [ℹ] adding identity "arn:aws:iam::499593605069:role/eksctl-test-22-nodegroup-cx-wd-cp-NodeInstanceRole-H02ZU6FRUZBH" to auth ConfigMap
2021-03-15 22:52:55 [ℹ] nodegroup "cx-wd-cpu" has 0 node(s)
2021-03-15 22:52:55 [ℹ] waiting for at least 1 node(s) to become ready in "cx-wd-cpu"
2021-03-15 22:53:46 [ℹ] nodegroup "cx-wd-cpu" has 1 node(s)
2021-03-15 22:53:46 [ℹ] node "ip-192-168-54-20.ec2.internal" is ready
2021-03-15 22:53:46 [ℹ] adding identity "arn:aws:iam::499593605069:role/eksctl-test-22-nodegroup-cx-ws-gp-NodeInstanceRole-1S2VZVQT1O47H" to auth ConfigMap
2021-03-15 22:53:46 [ℹ] nodegroup "cx-ws-gpu-spot" has 0 node(s)
2021-03-15 22:53:46 [ℹ] waiting for at least 1 node(s) to become ready in "cx-ws-gpu-spot"
2021-03-15 22:55:01 [ℹ] nodegroup "cx-ws-gpu-spot" has 1 node(s)
2021-03-15 22:55:01 [ℹ] node "ip-192-168-45-130.ec2.internal" is ready
2021-03-15 22:55:01 [ℹ] adding identity "arn:aws:iam::499593605069:role/eksctl-test-22-nodegroup-cx-wd-gp-NodeInstanceRole-11FHMBS4WAUV9" to auth ConfigMap
2021-03-15 22:55:02 [ℹ] nodegroup "cx-wd-gpu" has 0 node(s)
2021-03-15 22:55:02 [ℹ] waiting for at least 1 node(s) to become ready in "cx-wd-gpu"
2021-03-15 22:57:27 [ℹ] nodegroup "cx-wd-gpu" has 1 node(s)
2021-03-15 22:57:27 [ℹ] node "ip-192-168-23-13.ec2.internal" is ready
2021-03-15 22:57:27 [ℹ] adding identity "arn:aws:iam::499593605069:role/eksctl-test-22-nodegroup-cx-wd-in-NodeInstanceRole-86YCX97VWLR9" to auth ConfigMap
2021-03-15 22:57:28 [ℹ] kubectl command should work with "/root/.kube/config", try 'kubectl get nodes'
2021-03-15 22:57:28 [✔] EKS cluster "test-22" in "us-east-1" region is ready