Exposing the cluster and services only through a specific network?
Hello, I am trying to set up my nodes so they are exposed through only one network. The machines have internet access and reach other computers through 192.168.1.0/24; the nodes also have a second network adapter on 192.168.210.0/24 that is isolated from non-k3s machines.
I am trying to expose the k3s cluster only on the isolated network, so that only hosts on the isolated network can see it, i.e. my personal computer shouldn't be able to access it since it doesn't live in the 210 network.
I tried to configure the k3s cluster with all the flags I could find, like `--bind-address`, `--advertise-address`, `--node-ip`, and `--node-external-ip`, but it seems I am still able to access the cluster's services if I reach the machines directly through their 192.168.1.0/24 addresses. Can this be done at the k3s level, with no configuration on the host OS?
```
$ sudo k3s kubectl get nodes -o wide
NAME               STATUS   ROLES                  AGE   VERSION        INTERNAL-IP      EXTERNAL-IP      OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cluster-server-0   Ready    control-plane,master   16h   v1.24.4+k3s1   192.168.210.20   192.168.210.20   Ubuntu 22.04.1 LTS   5.15.0-47-generic   containerd://1.6.6-k3s1
cluster-agent-1    Ready    <none>                 14h   v1.24.4+k3s1   192.168.210.21   192.168.210.21   Ubuntu 22.04.1 LTS   5.15.0-47-generic   containerd://1.6.6-k3s1
```
For example, I deployed Rancher and I am still able to access its UI through the machine's real IP (192.168.1.64).
To be clear: I want to be able to access both the cluster and the exposed services only through the isolated network. I set up the cluster with the defaults (flannel, Traefik, and the Klipper load balancer). Does anyone have a hint about where the issue is located? I have no idea whether I should blame flannel, Traefik, or the Klipper load balancer.
Try setting a kube-proxy option: when installing k3s, add the argument `--kube-proxy-arg="nodeport-addresses=192.168.100.131/32"`.
https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
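Adapted to the network in the original question, that might look like the following (an untested sketch; the standard get.k3s.io install script is assumed, and the CIDR is the poster's isolated 192.168.210.0/24 network):

```sh
# Sketch: restrict NodePort services to the isolated network.
# kube-proxy then serves NodePort traffic only on host addresses
# that fall inside this CIDR, instead of on all interfaces.
curl -sfL https://get.k3s.io | sh -s - server \
  --kube-proxy-arg="nodeport-addresses=192.168.210.0/24"
```

Note that this only affects NodePort services; the API server on 6443 and the kubelet on 10250 are governed by the bind/advertise flags discussed further down.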
@migs35323
I'm in the middle of doing the same thing (exposing k3s on only one network on a multi-homed machine). I've gotten farther than you in terms of exposing the services on only one IP address, but I'm having trouble with `kubectl logs`, which I'll describe separately.
I have two networks; let's call them 123.x.x.x/24, set up with WireGuard, and 234.x.x.x/24, set up with regular networking.
k3s is being configured on the WireGuard network, since it's private, and I don't want the API server (typically on port 6443), the kubelet (on port 10250), or other services exposed on the 234.x.x.x network.
Here is what I have so far:
```sh
INSTALL_K3S_SKIP_DOWNLOAD=true /opt/k3s/install.sh \
  -v=2 \
  --kube-apiserver-arg=v=2 \
  --bind-address 123.x.x.x \
  --advertise-address 123.x.x.x \
  --node-ip 123.x.x.x \
  --kubelet-arg="address=123.x.x.x" \
  --disable servicelb \
  --disable traefik
```
`bind-address` sets the address where the API server listens on port 6443, i.e. what kubectl talks to. `node-ip` and `kubelet-arg=address` relate to the IP that the kubelet listens on. I disable `servicelb` because I plan to use MetalLB instead. Disabling `traefik` prevents two NodePorts from being advertised on 0.0.0.0 on the host for the Traefik ingress controller.
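For reference, the same flags can also live in k3s's config file instead of the install command line (a sketch; k3s reads /etc/rancher/k3s/config.yaml at startup, and 123.x.x.x is the same placeholder for the WireGuard address used above):

```sh
# Sketch: equivalent configuration via k3s's config file, so the
# install script / systemd unit doesn't need the flags inline.
cat > /etc/rancher/k3s/config.yaml <<'EOF'
bind-address: 123.x.x.x        # placeholder for the WireGuard address
advertise-address: 123.x.x.x
node-ip: 123.x.x.x
kubelet-arg:
  - address=123.x.x.x
disable:
  - servicelb
  - traefik
EOF
```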
All of this works - I can create deployments, services, and configure MetalLB IP addresses on either 123.x.x.x/24 or 234.x.x.x/24 and connect to services through them.
I just can't use `kubectl logs`, lol:

```
Error from server: Get "https://123.x.x.x:10250/containerLogs/kube-system/metrics-server-7b67f64457-56rtn/metrics-server": proxy error from 123.x.x.x:6443 while dialing 123.x.x.x:10250, code 503: 503 Service Unavailable
```
So:
- See if the above options help you get k3s to listen only on your restricted network
- Double-check that `kubectl logs` works in your configuration (see the sketch below)
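A quick way to verify both points (a sketch; `ss` on the host and the metrics-server deployment name are assumptions about the target system):

```sh
# Sketch: confirm the apiserver (6443) and kubelet (10250) are bound
# to the restricted address only, not 0.0.0.0.
sudo ss -tlnp | grep -E ':(6443|10250)\b'

# Exercise the same apiserver->kubelet path that `kubectl logs` uses.
kubectl -n kube-system logs deploy/metrics-server --tail=5
```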
K3s uses a tunnel from the agent to the server that the apiserver then uses to establish reverse connections for logs and other server-to-agent connections. If you change the kubelet's addresses such that the node's addresses (as listed in `kubectl get node`) don't align with the kubelet's actual listen address, that won't work. You can start `k3s server` with the `--debug` flag to log some additional information about these connections, but as long as the private IPs agree you should be fine.
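One way to check that alignment (a sketch; `server1` is a placeholder node name):

```sh
# Sketch: the addresses the apiserver will try to dial, from the node object...
kubectl get node server1 -o jsonpath='{.status.addresses}{"\n"}'
# ...versus where the kubelet is actually listening on that node.
sudo ss -tlnp | grep ':10250'
```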
@brandond
Wow, that's a great suggestion I wish I had a few hours ago, lmao. Added `--debug` in addition to all the flavors of `v=9` I already had in k3s.service, none of which had shown that it's actually trying to connect to 127.0.0.1:10250, not 123.x.x.x:10250. The debug output has an interesting line:
```
Dec 28 01:13:02 server1 k3s[44246]: I1228 01:13:02.457241   44246 round_trippers.go:466] curl -v -XGET 'https://123.x.x.x:10250/containerLogs/kube-system/local-path-provisioner-5d56847996-dj6g8/local-path-provisioner'
Dec 28 01:13:02 server1 k3s[44246]: I1228 01:13:02.457451   44246 round_trippers.go:510] HTTP Trace: Dial to tcp:123.x.x.x:6443 succeed
Dec 28 01:13:02 server1 k3s[44246]: time="2022-12-28T01:13:02Z" level=debug msg="Tunnel server handing HTTP/1.1 CONNECT request for //123.x.x.x:10250 from 123.x.x.x:33094"
Dec 28 01:13:02 server1 k3s[44246]: time="2022-12-28T01:13:02Z" level=debug msg="Tunnel server egress proxy dialing 127.0.0.1:10250 directly"
```
So now to figure out how to get the 'tunnel' to not try to use 127.0.0.1.
My situation feels like a bug, but I'm not ready to plant that flag yet.
k3s Server Config - Networking: notice that the comments for `--egress-selector-mode` talk about the loopback address. In my case, the kubelet is intentionally not listening on that address. `disabled` as a value for this flag looks interesting, but I'm not sure of its consequences beyond a literal reading of the setting's description.
`--egress-selector-mode disabled` fixed my problem with `kubectl logs`.
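For anyone following along, the flag slots into the same install invocation used earlier (a sketch based on that command, with the same 123.x.x.x placeholder):

```sh
# Sketch: rerun the install with the tunnel-based egress proxy disabled,
# so the apiserver dials kubelets at their node addresses directly.
INSTALL_K3S_SKIP_DOWNLOAD=true /opt/k3s/install.sh \
  --bind-address 123.x.x.x \
  --advertise-address 123.x.x.x \
  --node-ip 123.x.x.x \
  --kubelet-arg="address=123.x.x.x" \
  --egress-selector-mode disabled \
  --disable servicelb \
  --disable traefik
```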
In that mode, the agent's reverse tunnel to the server is completely unused, so you must ensure that the addresses are routable and the ports are open. In the default mode, the kubelet listens on the loopback adapter (actually, on the wildcard address), and the server uses the tunnel to connect back to it at the loopback address, without requiring the port to be exposed externally. The tunnel is, however, used by other things, such as the metrics-server pod and other monitoring tools.
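If you do run with the tunnel disabled, a minimal reachability check might look like this (a sketch; `nc` and the placeholder address are assumptions):

```sh
# Sketch: with the tunnel disabled, every server must be able to reach
# each node's kubelet port directly at the node's listed address.
nc -zv 123.x.x.x 10250
```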
Have you tried using `--kubelet-arg=address=127.0.0.1` instead of binding it to a specific interface address?
> Have you tried using `--kubelet-arg=address=127.0.0.1` instead of binding it to a specific interface address?
No, but I could at a later time when I rebuild the cluster for the nth time :)
What really threw me off course was the misleading message from kubectl and from the k3s logs themselves reporting being unable to connect to the node-ip when what it was actually trying to do was connect to loopback.
The apiserver attempts to connect to the kubelet's address, via the tunnel proxy connection that uses a remotedialer connection to the loopback address on the node being connected to. It's our implementation of https://github.com/kubernetes-sigs/apiserver-network-proxy#readme. Because the apiserver only knows about the attempt to connect to the kubelet's address, that's what you get errors about if it fails.
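That same apiserver-to-kubelet path can be poked directly, which makes such failures easier to attribute (a sketch; `server1` is the node name from the logs above, and the kubelet is assumed to serve /healthz on its main port):

```sh
# Sketch: ask the apiserver to proxy a request to the node's kubelet;
# this traverses the same tunnel/egress path as `kubectl logs`.
kubectl get --raw "/api/v1/nodes/server1/proxy/healthz"
```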