cluster-api-provider-aws
Private only subnets topology cannot get the information of EC2 instances from EKS cluster
/kind bug
What steps did you take and what happened:
I followed the instructions at https://cluster-api-aws.sigs.k8s.io/topics/bring-your-own-aws-infrastructure.html to create an EKS cluster in an existing VPC.
I used a kind management cluster, set up following the getting-started guide.
clusterctl generate cluster capi-eks-quickstart --flavor eks --kubernetes-version v1.26.0 --worker-machine-count=2 > capi-eks-quickstart.yaml
I used a VPC with two private subnets and their route tables, and specified the IDs in the generated YAML file.
In the end, the EKS cluster and the EC2 instances are running, but the EKS cluster cannot see the EC2 instances as nodes.
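For reference, the subnet IDs went into the AWSManagedControlPlane network section, roughly as in the sketch below (the VPC and subnet IDs here are placeholders, not my real ones, and the apiVersion may differ by CAPA release):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: AWSManagedControlPlane
metadata:
  name: capi-eks-quickstart-control-plane
spec:
  network:
    vpc:
      id: vpc-0123456789abcdef0        # placeholder: existing VPC
    subnets:
      - id: subnet-0aaaaaaaaaaaaaaaa   # placeholder: private subnet A
      - id: subnet-0bbbbbbbbbbbbbbbb   # placeholder: private subnet B
```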
When I run:
clusterctl get kubeconfig capi-eks-quickstart > capi-eks-quickstart.kubeconfig
kubectl get nodes --kubeconfig capi-eks-quickstart.kubeconfig
it shows: No resources found in default namespace.
I also checked the controller logs with:
kubectl logs capa-controller-manager-5d487d7d68-nj46j -n capa-system
What did you expect to happen:
The worker nodes should be listed as Ready, e.g.:
capi-quickstart-md-0-55x6t-5649968bd7-8tq9v Ready
Anything else you would like to add:
The cluster status looks wrong.
It always gets stuck here.
I tried creating EC2 VPC endpoints to let the instances connect, but it did not seem to help.
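For the record, these are the endpoints I tried to set up (sketch only: the region and VPC ID are placeholders, the service list follows the EKS private-cluster documentation, and the script only prints the aws CLI commands so they can be reviewed before running; --subnet-ids and --security-group-ids for the private subnets would still need to be added):

```shell
#!/bin/sh
# Placeholders: substitute your own region and VPC ID.
REGION=us-west-2
VPC_ID=vpc-0123456789abcdef0
CMDS=""

# Interface endpoints worker nodes typically need in private-only
# subnets (per the EKS private cluster documentation).
for SVC in ec2 ecr.api ecr.dkr sts elasticloadbalancing logs; do
  CMD="aws ec2 create-vpc-endpoint --vpc-id $VPC_ID --vpc-endpoint-type Interface --service-name com.amazonaws.$REGION.$SVC --private-dns-enabled"
  echo "$CMD"
  CMDS="$CMDS $CMD"
done

# S3 uses a Gateway endpoint (needed for pulling ECR image layers).
CMD="aws ec2 create-vpc-endpoint --vpc-id $VPC_ID --vpc-endpoint-type Gateway --service-name com.amazonaws.$REGION.s3"
echo "$CMD"
CMDS="$CMDS $CMD"
```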
Environment:
- Cluster-api-provider-aws version: v1.42
- Kubernetes version: (use kubectl version): v1.26.0
- OS (e.g. from /etc/os-release): Ubuntu 22.04