
Dual stack on k3s with Klipper load balancer

Open • anon-software opened this issue 10 months ago • 7 comments

I cannot get a dual stack configuration to work on k3s with servicelb (Klipper). Here is the values file I used:

DNS1: 192.168.2.254
adminPassword: admin
dualStack:
  enabled: true
persistentVolumeClaim:
  enabled: true
  size: 100Mi
resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi
serviceDns:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
serviceWeb:
  externalTrafficPolicy: Cluster
  http:
    port: "2080"
  https:
    port: "2443"
  type: LoadBalancer
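
For context, a typical install of this chart with the values above looks roughly like this; the repository alias and release name are just examples, adjust to your setup:

$ helm repo add mojo2600 https://mojo2600.github.io/pihole-kubernetes/
$ helm repo update
$ helm upgrade --install pihole mojo2600/pihole --namespace pihole --create-namespace --values values.yaml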

This apparently creates two services for each endpoint, one for IPv4 and one for IPv6.

$ sudo kubectl get svc -n pihole
NAME                  TYPE           CLUSTER-IP           EXTERNAL-IP                                               PORT(S)                         AGE
pihole-dhcp           NodePort       10.43.225.145        <none>                                                    67:31286/UDP                    14m
pihole-dns-tcp        LoadBalancer   10.43.125.101        192.168.2.235,192.168.2.240,192.168.2.241,192.168.2.242   53:30891/TCP                    14m
pihole-dns-tcp-ipv6   LoadBalancer   2001:cafe:43::cde8   <pending>                                                 53:30509/TCP                    14m
pihole-dns-udp        LoadBalancer   10.43.227.133        192.168.2.235,192.168.2.240,192.168.2.241,192.168.2.242   53:30797/UDP                    14m
pihole-dns-udp-ipv6   LoadBalancer   2001:cafe:43::603a   <pending>                                                 53:31366/UDP                    14m
pihole-web            LoadBalancer   10.43.138.225        192.168.2.235,192.168.2.240,192.168.2.241,192.168.2.242   2080:31393/TCP,2443:31467/TCP   14m
pihole-web-ipv6       LoadBalancer   2001:cafe:43::661c   <pending>                                                 2080:31095/TCP,2443:32206/TCP   14m

As you can see above, only one service of each pair gets an external IP address. In my case it is always the IPv4 one, but I think I have had the IPv6 service get the external IP address on occasion.

Klipper creates corresponding pods for these services.

$ sudo kubectl get pod -n kube-system|grep pihole
svclb-pihole-dns-tcp-eb679da4-55l7g        1/1     Running     0          18m
svclb-pihole-dns-tcp-eb679da4-5phpx        1/1     Running     0          18m
svclb-pihole-dns-tcp-eb679da4-clhp5        1/1     Running     0          18m
svclb-pihole-dns-tcp-eb679da4-fn7p7        1/1     Running     0          18m
svclb-pihole-dns-tcp-ipv6-1930b28c-46tfl   0/1     Pending     0          18m
svclb-pihole-dns-tcp-ipv6-1930b28c-dx456   0/1     Pending     0          18m
svclb-pihole-dns-tcp-ipv6-1930b28c-gtjtf   0/1     Pending     0          18m
svclb-pihole-dns-tcp-ipv6-1930b28c-r644b   0/1     Pending     0          18m
svclb-pihole-dns-udp-def81466-cc7c4        1/1     Running     0          18m
svclb-pihole-dns-udp-def81466-hbktg        1/1     Running     0          18m
svclb-pihole-dns-udp-def81466-mx5dr        1/1     Running     0          18m
svclb-pihole-dns-udp-def81466-sff7z        1/1     Running     0          18m
svclb-pihole-dns-udp-ipv6-7586bc32-5gl2q   0/1     Pending     0          18m
svclb-pihole-dns-udp-ipv6-7586bc32-cb7wn   0/1     Pending     0          18m
svclb-pihole-dns-udp-ipv6-7586bc32-dqm9l   0/1     Pending     0          18m
svclb-pihole-dns-udp-ipv6-7586bc32-qdq4v   0/1     Pending     0          18m
svclb-pihole-web-38f1c6a9-bxzkg            2/2     Running     0          18m
svclb-pihole-web-38f1c6a9-hn9tt            2/2     Running     0          18m
svclb-pihole-web-38f1c6a9-q26hp            2/2     Running     0          18m
svclb-pihole-web-38f1c6a9-w4h9q            2/2     Running     0          18m
svclb-pihole-web-ipv6-9b288549-4dkgq       0/2     Pending     0          18m
svclb-pihole-web-ipv6-9b288549-cs7zz       0/2     Pending     0          18m
svclb-pihole-web-ipv6-9b288549-jktgp       0/2     Pending     0          18m
svclb-pihole-web-ipv6-9b288549-rj4d8       0/2     Pending     0          18m

Drilling down into one of these pending pods, I can see the problem:

$ sudo kubectl describe pod -n kube-system svclb-pihole-web-ipv6-9b288549-4dkgq
[snip]
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  20m                  default-scheduler  0/4 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/4 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 3 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  4m21s (x3 over 14m)  default-scheduler  0/4 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/4 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 3 Preemption is not helpful for scheduling.

The IPv4 svclb pods are already holding the host ports (53, 2080 and 2443) on the nodes, so their IPv6 counterparts have nowhere to schedule. I think that if we had a single service configured for dual stack, we would not have this problem.
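
In other words, a single Service along these lines should get both an IPv4 and an IPv6 external address from Klipper while claiming each host port only once per node. This is a hand-written sketch, not what the chart currently renders; the selector and port names are guesses:

apiVersion: v1
kind: Service
metadata:
  name: pihole-dns-udp
  namespace: pihole
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  # request both address families on the same Service object
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: pihole            # assumed; must match the chart's pod labels
  ports:
    - name: dns-udp
      protocol: UDP
      port: 53
      targetPort: dns-udp  # assumed container port name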

anon-software commented on Jan 17 '25

I found a related pull request, https://github.com/MoJo2600/pihole-kubernetes/pull/202. After briefly testing it, it appears to fix my problem. It would be nice if you could merge it. In the meantime I shall use my forked repository.

anon-software commented on Jan 20 '25

For what it's worth, I found a workaround until the pull request gets merged. Using Kustomize, we can add a delete patch to remove the single-stack IPv6 service.

# delete-ipv6-service.yaml
$patch: delete
apiVersion: v1
kind: Service
metadata:
  name: pihole-dns-ipv6
---
# kustomization.yaml
patches:
  - path: delete-ipv6-service.yaml

Using this with Rancher Fleet successfully removed the resource.
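
Outside of Fleet, the same patch should also work with plain Kustomize on top of the rendered chart, roughly like this (the repo alias, release name and file names are only examples, and you need one delete patch per IPv6 Service your release actually creates, e.g. pihole-dns-udp-ipv6 above):

$ helm template pihole mojo2600/pihole --namespace pihole --values values.yaml > rendered.yaml

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: pihole
resources:
  - rendered.yaml
patches:
  - path: delete-ipv6-service.yaml   # one entry per IPv6 Service to remove

$ kubectl apply -k .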

mrsteakhouse commented on Jan 24 '25

I do not understand how that can help. Currently, the template service-dns-udp.yaml only adds the dual-stack settings to the service when its type is not LoadBalancer:

  {{- if and (.Values.dualStack.enabled) (not (eq .Values.serviceDns.type "LoadBalancer")) }}
  ipFamilies:
  - IPv4
  - IPv6
  ipFamilyPolicy: PreferDualStack
  {{- end }}

So, if you choose the LoadBalancer type and remove the IPv6 service with Kustomize, the remaining service will support only IPv4. The template in the pull request, on the other hand, changes that condition to the following:

  {{- if and (.Values.dualStack.enabled) (or (not (eq .Values.serviceDns.type "LoadBalancer")) (.Values.dualStack.loadBalancer)) }}
  ipFamilies:
  - IPv4
  - IPv6
  ipFamilyPolicy: PreferDualStack
  ...
  {{- end }}

The above actually adds dual-stack support to a single service.
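
With the patched template it should be enough to set the new flag next to the existing dual-stack block, roughly like this (the flag name is taken from the pull request; I have not checked whether the separate *-ipv6 services also disappear):

dualStack:
  enabled: true
  # flag added by PR #202: apply ipFamilies/ipFamilyPolicy to LoadBalancer services too
  loadBalancer: true
serviceDns:
  externalTrafficPolicy: Cluster
  type: LoadBalancer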

As a side note, the name of the new parameter is not self-explanatory at all. But it is just a name, so any one will do.

anon-software commented on Jan 25 '25

How do you manage to run so many pods? Pi-hole uses SQLite as its database, so won't there be data inconsistency?

electropolis commented on Jan 29 '25

I use etcd, not SQLite. See https://docs.k3s.io/datastore

anon-software commented on Jan 29 '25

But you didn't understand my question. I'm not talking about the datastore for k3s; I'm talking about the Pi-hole app that runs on Kubernetes. The app uses a local SQLite database, so it's not possible to run another pod with its own SQLite copy -- classic data inconsistency.

electropolis commented on Jan 29 '25

Sorry, I see what you mean now. The pods listed above are not the "worker" pods, if I can call them that. They are created by Klipper in the kube-system namespace and provide the load balancing: there are four nodes in the cluster and one pod per service and node. They exist just to route requests to the single pihole pod:

$ sudo kubectl get pod -n pihole
NAME                      READY   STATUS    RESTARTS   AGE
pihole-77f7bd48b9-qqzgq   1/1     Running   0          4d22h
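
The svclb pods only run Klipper's forwarding container, so there is still exactly one Pi-hole instance (and one SQLite database). If you want to confirm that, each LoadBalancer Service gets its own svclb workload in kube-system (a DaemonSet on my k3s version), e.g.:

$ sudo kubectl get daemonset -n kube-system | grep svclb-pihole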

anon-software commented on Jan 29 '25