Revert the workaround that avoids a problematic zone in cn-north-1, as EKS now supports the control plane in all AZs of the region
Before October 2022, there was a limitation preventing EKS cluster creation in the AZ cnn1-az4 (which maps to cn-north-1d in most accounts), so eksctl added a fix to avoid this AZ by default (#3916). Now that EKS supports creating the control plane in all AZs of the cn-north-1 region, it's time to revert this temporary fix in eksctl.
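For context, the #3916 workaround is essentially a denylist on AZ IDs during zone selection. The sketch below is a hypothetical Go illustration of that idea (not eksctl's actual code); reverting would amount to dropping cnn1-az4 from the denylist, or removing the filter entirely:

```go
package main

import "fmt"

// Hypothetical denylist of AZ IDs that zone selection might skip.
// Illustration only; not eksctl's actual implementation of #3916.
var unsupportedZoneIDs = map[string]bool{
	"cnn1-az4": true, // previously unsupported for the EKS control plane in cn-north-1
}

// filterZones drops any zone whose AZ ID appears in the denylist.
func filterZones(zoneIDByName map[string]string, names []string) []string {
	var kept []string
	for _, name := range names {
		if unsupportedZoneIDs[zoneIDByName[name]] {
			continue // skip zones that were once unsupported
		}
		kept = append(kept, name)
	}
	return kept
}

func main() {
	// Example mapping for cn-north-1; cnn1-az4 maps to cn-north-1d in most accounts.
	zoneIDByName := map[string]string{
		"cn-north-1a": "cnn1-az1",
		"cn-north-1b": "cnn1-az2",
		"cn-north-1d": "cnn1-az4",
	}
	fmt.Println(filterZones(zoneIDByName, []string{"cn-north-1a", "cn-north-1b", "cn-north-1d"}))
	// Prints: [cn-north-1a cn-north-1b] while the workaround is in place.
}
```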
What feature/behavior/change do you want?
Revert the fix for #3916
Why do you want this feature?
EKS now supports creating the control plane in all AZs of the cn-north-1 region:
```
[ec2-user@ip-172-31-30-56 ~]$ eksctl create cluster --name bjs80-test --zones cn-north-1a,cn-north-1b,cn-north-1d --region cn-north-1
2022-10-16 11:26:59 [ℹ] eksctl version 0.115.0
2022-10-16 11:26:59 [ℹ] using region cn-north-1
2022-10-16 11:26:59 [ℹ] subnets for cn-north-1a - public:192.168.0.0/19 private:192.168.96.0/19
2022-10-16 11:26:59 [ℹ] subnets for cn-north-1b - public:192.168.32.0/19 private:192.168.128.0/19
2022-10-16 11:26:59 [ℹ] subnets for cn-north-1d - public:192.168.64.0/19 private:192.168.160.0/19
2022-10-16 11:26:59 [ℹ] nodegroup "ng-ca601265" will use "" [AmazonLinux2/1.23]
2022-10-16 11:26:59 [ℹ] using Kubernetes version 1.23
2022-10-16 11:26:59 [ℹ] creating EKS cluster "bjs80-test" in "cn-north-1" region with managed nodes
2022-10-16 11:26:59 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2022-10-16 11:26:59 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=cn-north-1 --cluster=bjs80-test'
2022-10-16 11:26:59 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "bjs80-test" in "cn-north-1"
2022-10-16 11:26:59 [ℹ] CloudWatch logging will not be enabled for cluster "bjs80-test" in "cn-north-1"
2022-10-16 11:26:59 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=cn-north-1 --cluster=bjs80-test'
2022-10-16 11:26:59 [ℹ]
2 sequential tasks: { create cluster control plane "bjs80-test",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "ng-ca601265",
}
}
2022-10-16 11:26:59 [ℹ] building cluster stack "eksctl-bjs80-test-cluster"
2022-10-16 11:26:59 [ℹ] deploying stack "eksctl-bjs80-test-cluster"
2022-10-16 11:27:29 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:27:59 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:28:59 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:30:00 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:31:00 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:32:00 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:33:00 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:34:00 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:35:00 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:36:01 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:37:01 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:39:03 [ℹ] building managed nodegroup stack "eksctl-bjs80-test-nodegroup-ng-ca601265"
2022-10-16 11:39:03 [ℹ] deploying stack "eksctl-bjs80-test-nodegroup-ng-ca601265"
2022-10-16 11:39:03 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-nodegroup-ng-ca601265"
2022-10-16 11:39:33 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-nodegroup-ng-ca601265"
2022-10-16 11:40:10 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-nodegroup-ng-ca601265"
2022-10-16 11:41:12 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-nodegroup-ng-ca601265"
2022-10-16 11:42:51 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-nodegroup-ng-ca601265"
2022-10-16 11:42:51 [ℹ] waiting for the control plane to become ready
2022-10-16 11:42:51 [✔] saved kubeconfig as "/home/ec2-user/.kube/config"
2022-10-16 11:42:51 [ℹ] no tasks
2022-10-16 11:42:51 [✔] all EKS cluster resources for "bjs80-test" have been created
2022-10-16 11:42:52 [ℹ] nodegroup "ng-ca601265" has 2 node(s)
2022-10-16 11:42:52 [ℹ] node "ip-192-168-13-128.cn-north-1.compute.internal" is ready
2022-10-16 11:42:52 [ℹ] node "ip-192-168-80-198.cn-north-1.compute.internal" is ready
2022-10-16 11:42:52 [ℹ] waiting for at least 2 node(s) to become ready in "ng-ca601265"
2022-10-16 11:42:52 [ℹ] nodegroup "ng-ca601265" has 2 node(s)
2022-10-16 11:42:52 [ℹ] node "ip-192-168-13-128.cn-north-1.compute.internal" is ready
2022-10-16 11:42:52 [ℹ] node "ip-192-168-80-198.cn-north-1.compute.internal" is ready
2022-10-16 11:42:53 [ℹ] kubectl command should work with "/home/ec2-user/.kube/config", try 'kubectl get nodes'
2022-10-16 11:42:53 [✔] EKS cluster "bjs80-test" in "cn-north-1" region is ready
```
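Since the AZ-ID-to-name mapping mentioned above is account-specific, the cnn1-az4 to cn-north-1d mapping can be confirmed per account. A minimal sketch using the AWS SDK for Go v2 (an illustration, not part of eksctl) follows:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	// Load default credentials and force the region to cn-north-1.
	// Assumes credentials for the China partition are configured.
	cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithRegion("cn-north-1"))
	if err != nil {
		log.Fatal(err)
	}
	client := ec2.NewFromConfig(cfg)

	// DescribeAvailabilityZones returns both the account-specific zone name
	// (e.g. cn-north-1d) and the stable zone ID (e.g. cnn1-az4).
	out, err := client.DescribeAvailabilityZones(context.TODO(), &ec2.DescribeAvailabilityZonesInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, az := range out.AvailabilityZones {
		fmt.Printf("%s -> %s\n", *az.ZoneName, *az.ZoneId)
	}
}
```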
Thanks for updating us. We'll revert the changes soon and allow creation of clusters in cnn1-az4.
Now that EKS supports creating the control plane in all AZs of the cn-north-1 region
@walkley can you add any links to AWS docs, please? I can't seem to find any atm.
There's no public announcement or documentation for this update; you may reach out to the EKS PM to confirm it.
This issue is stale because it has been open 30 days with no activity. Remove the stale label or comment, or this will be closed in 5 days.
This needs to be reverted.
Hi @cartermckinnon. We are aware, but the team is occupied with other priorities. We will work on this soon.