
Cluster with control-plane running in GKE and edge nodes behind NAT


Hi, my custom cluster's k8s control-plane components run as Pods in GKE, and its workers are edge servers behind NAT. This means there is no k8s Node representing the "master" control plane, only edge workers. Edge worker nodes can reach the apiserver, but the apiserver cannot reach them. For example:

$ kubectl get po -n cluster-01
NAME                                       READY   STATUS
apiserver-bcc75c46f-889hj                  1/1     Running
calico-kube-controllers-65684bc956-dthwg   1/1     Running
controller-manager-57446c4f94-vm8sn        1/1     Running
etcd-0                                     1/1     Running
scheduler-587557b465-fgdvk                 1/1     Running

And when using the custom cluster's kubeconfig, listing nodes shows only a single edge worker:

$ kubectl get node
NAME             STATUS   ROLES    AGE    VERSION
rafi-worker-03   Ready    <none>   7d7h   v1.23.6

My only use case is being able to reach the edge nodes from the apiserver, for example on port 10250 for kubectl logs/exec. But my master node doesn't show up in the node list at all, so I'm a little lost as to how to set up Kilo in this scenario. Is it possible at all? 🙏
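For context, my understanding from the Kilo docs is that when every peer is a registered Node, per-node behavior is configured with annotations; something like the following is what I would try for a NAT'd edge worker (annotation names taken from kilo.squat.ai; the values here are just an assumed example):

$ kubectl annotate node rafi-worker-03 \
    kilo.squat.ai/location="edge" \
    kilo.squat.ai/persistent-keepalive="10"   # keepalive (in seconds) helps NAT'd peers keep the tunnel open

But with no Node object for the control plane, there is nothing to annotate on the other side.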

rafi, Aug 08 '23

Hi @rafi, thanks for raising this issue. Unfortunately, some hosted Kubernetes offerings run the control-plane processes in an entirely opaque way, without registering any control-plane Nodes. In this configuration it is currently impossible to give the control plane access to the WireGuard mesh, and thus to Kubelet addresses in remote networks or networks behind NAT, because Kilo cannot run its agent on a control-plane node to configure IP routes, VPN tunnels, etc.
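For reference, the reason is architectural: Kilo runs its agent, kg, as a DaemonSet with host networking and elevated privileges on every registered Node, and only from there can it program WireGuard interfaces and routes. A trimmed sketch of the deployment (simplified from the manifests in the Kilo repo; RBAC, tolerations, and several host mounts omitted) looks roughly like:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kilo
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kilo
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kilo
    spec:
      hostNetwork: true             # the agent programs the host's network stack directly
      containers:
      - name: kilo
        image: squat/kilo
        args:
        - --hostname=$(NODE_NAME)   # each agent maps itself to a registered Node object
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          privileged: true          # required to create WireGuard interfaces and routes

Since your GKE-hosted control plane has no Node object and no host that Kilo can run on, there is no way to pull it into the mesh.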

squat, Oct 04 '23