k3d
A k3d pod can reach company private addresses by IP, but it does not resolve them by FQDN.
I'm trying to figure out what went wrong. The setup is:
laptop -> NAT + bridged VMware VM (multi-homed) -> Docker k3d nodes -> pod
The laptop has an openconnect VPN connection to the company; the VPN was started after k3d create.
An Ubuntu pod can reach the company VPN as well as the local networks by IP. See below.
root@ubuntu:/# ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=62 time=6.46 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=62 time=5.33 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=62 time=3.44 ms
^C
--- 192.168.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2005ms
rtt min/avg/max/mdev = 3.437/5.076/6.462/1.247 ms
root@ubuntu:/# ping 172.20.30.106
PING 172.20.30.106 (172.20.30.106) 56(84) bytes of data.
64 bytes from 172.20.30.106: icmp_seq=1 ttl=126 time=14.1 ms
64 bytes from 172.20.30.106: icmp_seq=2 ttl=126 time=17.3 ms
64 bytes from 172.20.30.106: icmp_seq=3 ttl=126 time=16.3 ms
The k3d nodes (i.e., the Docker containers) can reach both using FQDNs:
[rockylinux@rockylinux8 ~]$ docker exec -it 0d1 sh
/ # cat /etc/resolv.conf
search localdomain lan linuxvmimages.local
nameserver 127.0.0.11
options ndots:0
/ # ping bayes-air.lan
PING bayes-air.lan (192.168.1.151): 56 data bytes
64 bytes from 192.168.1.151: seq=0 ttl=63 time=11.122 ms
6 packets transmitted, 6 packets received, 0% packet loss
round-trip min/avg/max = 14.796/24.031/60.120 ms
/ # ping xxxx.yyy.edu
PING xxx.yyy.edu (172.20.30.106): 56 data bytes
64 bytes from 172.20.30.106: seq=0 ttl=127 time=14.678 ms
[rockylinux@rockylinux8 ~]$ kubectl get configmaps -n kube-system coredns -o yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        hosts /etc/coredns/NodeHosts {
          ttl 60
          reload 15s
          fallthrough
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    import /etc/coredns/custom/*.server
  NodeHosts: |
    172.18.0.3 k3d-gitops-server-0
kind: ConfigMap
So the forward . /etc/resolv.conf directive doesn't seem to work properly?
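For what it's worth, here is a minimal, cluster-free sketch of why that forward target can fail: CoreDNS inherits a resolv.conf derived from the node's, and 127.0.0.11 is Docker's embedded DNS on the node's loopback, so from inside a pod's network namespace that address points at the pod itself. The file contents below are copied from this setup; the grep check is only an illustration of the problem, not how k3s actually validates the file.

```shell
# Sample of the node's resolv.conf as shown above.
cat > /tmp/node-resolv.conf <<'EOF'
search localdomain lan linuxvmimages.local
nameserver 127.0.0.11
options ndots:0
EOF

# If every nameserver entry is a loopback address, a pod (and therefore a
# CoreDNS "forward . /etc/resolv.conf") has no usable upstream resolver.
if ! grep -E '^nameserver' /tmp/node-resolv.conf | grep -qv '^nameserver 127\.'; then
  echo "only loopback nameservers: pods cannot use this upstream"
fi
```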
Given that the k3d nodes themselves are able to resolve these names, Docker's embedded DNS service is working fine:
[rockylinux@rockylinux8 ~]$ docker exec -it k3d-gitops-server-0 sh
/ # cat /etc/resolv.conf
search localdomain lan linuxvmimages.local
nameserver 127.0.0.11
options ndots:0
I believe k3s checks the host's resolv.conf, and if there is anything in it that it doesn't like, k3s uses 8.8.8.8 instead. Pods can't reach 127.0.0.11, correct? But k3s should already have made that adjustment, no? How do I check whether that's what happened?
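One way to check, assuming default k3s paths (the file location is an assumption based on standard k3s behavior; the container name is from this setup): k3s writes the resolv.conf it hands to the kubelet under its agent data directory, so inspecting that file inside the node container shows whether the fallback kicked in.

```shell
# Inspect the resolv.conf k3s generated for the kubelet (path assumes the
# default k3s layout; container name from this setup):
#   docker exec k3d-gitops-server-0 cat /var/lib/rancher/k3s/agent/etc/resolv.conf
#
# If the host file was rejected (e.g. loopback-only nameservers), the fallback
# content you would expect to see there looks like:
printf 'nameserver 8.8.8.8\n'
```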
I saw #209. It is a VERY long thread! It would be tremendously helpful if the k3d documentation summarized which solutions work and which are recommended. I'm still reading through it.
- patch the coredns configmap at runtime # seems to work for me:
  KUBE_EDITOR="sed -i 's|forward.*|forward . 192.168.117.1 192.168.117.2|'" kubectl edit -n kube-system cm coredns
- patch coredns at deploy time
- patch the host's iptables to forward anything destined for 8.8.8.8 to the host's DNS server
- have k3d volume-mount the host's /etc/resolv.conf or /run/systemd/resolve/resolv.conf
- export K3D_FIX_DNS=1 && k3d cluster create test # works for me, but the initial ping has a delay?
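To make the first workaround concrete, this is the transformation the sed expression in the KUBE_EDITOR one-liner performs on the Corefile (no cluster needed to try it; the 192.168.117.x addresses are the VMware host-side resolvers from this setup):

```shell
# The sed expression replaces CoreDNS's default upstream line with explicit
# resolvers, so CoreDNS stops depending on the node's 127.0.0.11 entry:
printf 'forward . /etc/resolv.conf\n' | sed 's|forward.*|forward . 192.168.117.1 192.168.117.2|'
# → forward . 192.168.117.1 192.168.117.2
```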
Hi @bayeslearner, sorry for only getting to look into this today.
As I see over in #209, you got it working after all, which is great!
Is this still an issue for you, or can we close this?
K3D_FIX_DNS is supposed to be set by default in an upcoming version of k3d.