
Cannot set the gateway IPv4 on multiple nodes

IceManGreen opened this issue 11 months ago

What happened:

I deployed a K3S cluster on 3 nodes:

  • master on 172.16.100.90
  • worker1 on 172.16.100.91
  • worker2 on 172.16.100.92

Also, note that I have 2 interfaces on each node:

  • control plane on enp1s0 172.16.100.0/22
  • data plane on enp2s0 172.16.110.0/22

So basically, I installed the K3S control plane on enp1s0 for each node.

Now I want to expose only the gateway load balancer service on enp2s0 for the data plane, while keeping all the other components on enp1s0.

So I tried to install Liqo with the gateway service type set to LoadBalancer, on the enp2s0 interface of the master node:

liqoctl install k3s --cluster-name domain-1 \
    --api-server-url https://172.16.100.90:6443 \
    --set "gateway.service.loadBalancer.ip=172.16.110.90,gateway.service.type=LoadBalancer,gateway.config.listeningPort=5873" \
    --context domain-1

Also note that I applied the workaround from issue #2370 to make it work.

However, the liqo gateway load balancer service keeps the IPv4 addresses of enp1s0 and not enp2s0:

$ kubectl get svc -n liqo --context domain-1
NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP                                 PORT(S)          AGE
liqo-auth                 NodePort       10.43.49.77     <none>                                      443:30795/TCP    18h
liqo-controller-manager   ClusterIP      10.43.208.84    <none>                                      9443/TCP         18h
liqo-gateway              LoadBalancer   10.43.124.235   172.16.100.90,172.16.100.91,172.16.100.92   5873:30316/UDP   47m
liqo-metric-agent         ClusterIP      10.43.237.220   <none>                                      443/TCP          18h
liqo-network-manager      ClusterIP      10.43.29.22     <none>                                      6000/TCP         18h
liqo-proxy                ClusterIP      10.43.250.48    <none>

Questions:

  • the EXTERNAL-IP of svc/liqo-gateway should be 172.16.110.90,172.16.110.91,172.16.110.92 with my install config, right?
  • the installation config with --set "gateway.service.loadBalancer.ip=172.16.110.90" suggests that we only set one IPv4. Here I set the data-plane IPv4 of the K3S master node, but what about the other nodes?

What you expected to happen:

I expect the gateway load balancer service to come up with the addresses 172.16.110.90,172.16.110.91,172.16.110.92, so that the WireGuard overlays communicate through enp2s0 on each node.

How to reproduce it (as minimally and precisely as possible):

  1. Install a K3S cluster with multiple nodes
  2. Install Liqo with:
liqoctl install k3s --cluster-name domain-1 \
    --api-server-url <master-control-plane-url> \
    --set "gateway.service.loadBalancer.ip=<master-data-plane-url>,gateway.service.type=LoadBalancer,gateway.config.listeningPort=5873" \
    --context domain-1
  3. Apply the workaround from issue #2370 to make the deployment work.

Environment:

  • Liqoctl version:
Client version: v0.9.4
Server version: v0.10.1
  • Kubernetes version (use kubectl version):
Client Version: v1.29.2
Server Version: v1.28.6+k3s2
  • Cloud provider or hardware configuration: Linux hypervisor; the K3S cluster is hosted on virtual machines deployed on QEMU/KVM with libvirt:
Compiled against library: libvirt 6.0.0
Using library: libvirt 6.0.0
Using API: QEMU 6.0.0
Running hypervisor: QEMU 4.2.1
  • Node image:
$ uname -a
Linux controller 5.10.0-19-amd64 #1 SMP Debian 5.10.149-2 (2022-10-21) x86_64 GNU/Linux

IceManGreen avatar Mar 05 '24 09:03 IceManGreen

Hi @IceManGreen,

the EXTERNAL-IP of svc/liqo-gateway should be 172.16.110.90,172.16.110.91,172.16.110.92 with my install config, right?

--set "gateway.service.loadBalancer.ip=172.16.110.90 sets the loadBalancerIP field inthe Service Spec. It is possible that your LoadBalancer provider does not support this field and so it is ignored, as the official documentation says:

This field will be ignored if the cloud-provider does not support the feature.

Also, even if supported, that field is used to set a single static LoadBalancer IP, so I don't expect it to automatically add the 172.16.110.91 and 172.16.110.92 IPs.
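For reference, a minimal sketch of what that Helm value renders to in the gateway Service spec (the name, namespace, port, and protocol are taken from the kubectl output above; everything else is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: liqo-gateway
  namespace: liqo
spec:
  type: LoadBalancer
  # what gateway.service.loadBalancer.ip sets:
  loadBalancerIP: 172.16.110.90   # ignored if the cloud provider does not support it
  ports:
  - port: 5873
    protocol: UDP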

fra98 avatar Mar 05 '24 10:03 fra98

Hi @fra98,

I did a little research, and indeed this use case won't work on K3S as it is. K3S uses ServiceLB as the LoadBalancer provider. According to the K3S documentation about the LoadBalancer behavior:

If the ServiceLB Pod runs on a node that has an external IP configured, the node's external IP is populated into the Service's status.loadBalancer.ingress address list. Otherwise, the node's internal IP is used.

In this case, ServiceLB will never consider extra interfaces when populating IPv4 addresses in its address list. It takes the node's internal IP, or the external IP configured by the admin, but nothing more.
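In theory (untested here, and it changes the node's external IP globally, not just for Liqo), one could stay on ServiceLB by registering each node's data-plane address as its external IP at K3S install time, so that ServiceLB publishes it. A sketch:

# on each node, substitute that node's own data-plane address
# (example below: the master; workers would use "agent" instead of "server")
curl -sfL https://get.k3s.io | sh -s - server --node-external-ip 172.16.110.90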

To anyone looking for a solution in the future: use MetalLB instead of ServiceLB in K3S alongside Liqo.

To do this, you will have to install K3S with ServiceLB disabled (--disable servicelb). Next, install MetalLB in your cluster as the LoadBalancer provider. Then configure the MetalLB address pool with your nodes' data-plane IPv4 addresses by applying this in your cluster:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: cheap
  namespace: metallb-system
spec:
  # example with the addresses of my own cluster nodes
  # these addresses are the expected IPs of my nodes on the data plane
  # they will be used by liqo to establish the VPN connections with the gateway
  addresses:
  - 172.16.110.90/32
  - 172.16.110.91/32
  - 172.16.110.92/32
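
Note that with MetalLB in layer-2 mode you typically also need an L2Advertisement referencing the pool, otherwise the addresses are allocated but never announced on the network. A minimal sketch (the resource name cheap-l2 is arbitrary):

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: cheap-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - cheap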

Finally, install Liqo with:

liqoctl install k3s --cluster-name domain-1 \
    --api-server-url <master-control-plane-url> \
    --set "gateway.service.type=LoadBalancer" \
    --context domain-1

In this setup, Liqo will automatically take one of the IPv4 addresses provided in the MetalLB address pool (configured above) to create the LoadBalancer service of the gateway.
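You can verify that the gateway picked an address from the pool by inspecting the service again:

$ kubectl get svc -n liqo liqo-gateway --context domain-1

The EXTERNAL-IP column should now show one of the pool addresses, e.g. 172.16.110.90.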

IceManGreen avatar Mar 06 '24 09:03 IceManGreen

It seems like a nice solution, thanks for sharing. Closing this issue.

fra98 avatar Mar 06 '24 10:03 fra98