Proposal: Need options to disable embedded DNS
I want to use the default bridge network and a macvlan network together in k8s. However, when a container is connected to the macvlan network, the DNS configuration it received from the default bridge network is changed by the macvlan network:
Start a container on the default bridge network:
[root@kube-node1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e9bed187828f sshd:1.0 "/usr/sbin/sshd -D" 11 minutes ago Up 11 minutes k8s_sshd-1.aea60a3a_sshd-1_default_33a61753-fc72-11e5-9520-525460110101_85c3614c
127973c47c43 gcr.io/google_containers/pause:2.0 "/pause" 11 minutes ago Up 11 minutes k8s_POD.6059dfa2_sshd-1_default_33a61753-fc72-11e5-9520-525460110101_94dbf418
[root@kube-node1 ~]# docker exec e9bed187828f cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.254.0.10
options ndots:5
options ndots:5
[root@kube-node1 ~]# docker exec e9bed187828f nslookup kubernetes.default
Server: 10.254.0.10
Address: 10.254.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.254.0.1
Connect the container to the macvlan network:
[root@kube-node1 ~]# docker network create -d macvlan --subnet=10.10.10.0/24 --gateway=10.10.10.1 -o parent=eth0 pub_net
056f952e74668afcce1f9f2d9543e847f562da0d044862775b0e660c85b9f744
[root@kube-node1 ~]# docker network connect --ip="10.10.10.100" pub_net 127973c47c43
/etc/resolv.conf will be changed:
[root@kube-node1 ~]# docker exec e9bed187828f cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local default.svc.cluster.local svc.cluster.local cluster.local
nameserver 127.0.0.11
options ndots:5 ndots:0
[root@kube-node1 ~]# docker exec e9bed187828f nslookup kubernetes.default
;; connection timed out; trying next origin
Server: 127.0.0.11
Address: 127.0.0.11#53
Name: kubernetes.default.svc.cluster.local
Address: 10.254.0.1
This causes every DNS query to go to 127.0.0.11 first, fail with ;; connection timed out; trying next origin, and only then reach 10.254.0.10, which makes DNS resolution very slow.
@mrjana @mavenugo @thockin @brendandburns
Refer to #19474
@hustcat can you share the docker version details? I couldn't quite understand the reason for the connection timeout. cc @sanimej
1.11.0-dev with experimental features enabled:
#docker --version
Docker version 1.11.0-dev, build 901c67a-unsupported, experimental
@mavenugo The timeout happens because the name can't be resolved in the Docker domain.
@hustcat Normally what we recommend is to pass external DNS servers through the --dns option of docker run. The embedded DNS server will forward queries that it can't resolve to the configured servers.
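(For illustration only, not part of the original comment: with the cluster DNS address and image from earlier in this thread, that recommendation would look roughly like this.)
docker run --dns 10.254.0.10 --dns-search default.svc.cluster.local sshd:1.0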
@sanimej Yes, 10.254.0.10 is the external DNS, and the embedded DNS server does forward the query to it. But this makes DNS resolution inefficient and slow. I think disabling it is the better option for me.
What's more, I don't want a port that has nothing to do with the application to be listening inside the container; it will confuse application developers.
[root@kube-node1 ~]# docker exec e9bed187828f netstat -lnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.11:45723 0.0.0.0:* LISTEN -
tcp 0 0 :::22 :::* LISTEN -
udp 0 0 127.0.0.11:42323 0.0.0.0:* -
And the iptables rules:
[root@kube-node1 ~]# docker exec --privileged e9bed187828f iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 3 packets, 210 bytes)
pkts bytes target prot opt in out source destination
24 1732 DNAT udp -- * * 0.0.0.0/0 127.0.0.11 udp dpt:53 to:127.0.0.11:42323
0 0 DNAT tcp -- * * 0.0.0.0/0 127.0.0.11 tcp dpt:53 to:127.0.0.11:45723
Chain POSTROUTING (policy ACCEPT 27 packets, 1942 bytes)
pkts bytes target prot opt in out source destination
0 0 SNAT udp -- * * 127.0.0.11 0.0.0.0/0 udp spt:42323 to::53
0 0 SNAT tcp -- * * 127.0.0.11 0.0.0.0/0 tcp spt:45723 to::53
I want the container to stay clean.
I'm in favour of this. I'm running DNS authoritatives and resolvers inside containers, and I'm sure many others do. The embedded DNS server prevents me from doing that without resorting to port redirections. Port 53 shouldn't be treated any differently from 22, 80 or 443, or at least there should be an option to disable this.
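(For concreteness, and not from the original comment: the "port redirection" workaround alluded to is publishing the host's port 53 to a containerized resolver listening on an alternate port, rather than letting the container own 53 directly; the image name and alternate port 5353 are placeholders.)
docker run -d --name resolver -p 53:5353/udp -p 53:5353/tcp example/unbound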
The built-in DNS server falls over under load. Let us disable it, or make it reliable.
The embedded DNS server loses part of the upstream responses, causing DNS timeouts for client apps inside the container.
I can't find a 100% reproducible sequence, but this happens quite often on one of our hosts (Server Version: 1.13.0, Kernel Version: 4.4.0-47-generic, Operating System: Ubuntu 16.04.1 LTS).
This happens only for containers in a custom network, when Docker uses the 127.0.0.11 resolver.
I run while true; do date; ping -w 1 s3.eu-central-1.amazonaws.com; echo sleep; sleep 1; done; in the containers and can see periods of ping: unknown host. This happens to all containers simultaneously; after a minute or so DNS responses start to arrive again.
During these strange periods I can see outgoing UDP packets with DNS requests using tcpdump inside the container, and responses from the upstream using tcpdump on the upstream, but no UDP packets with DNS responses ever arrive inside the container!
If I replace the 127.0.0.11 resolver with the upstream IP address, everything works fine.
FWIW, I'm running into issues with the embedded DNS on a host that uses nftables instead of iptables (iptables is disabled because it conflicts with nftables' dnat). It just doesn't work, plain and simple, for obvious reasons. While being able to disable eDNS won't solve the service discovery problem, I could work around that.
Any news? I have the same problem with nftables too.
I want to use my own DNS servers; no inside-docker name doohickey wanted. Just plain old standardized DNS that isn't being messed with.
Because of this, for now my containers have to run a startup script of
echo nameserver <my actual nameserver> > /etc/resolv.conf
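(A minimal sketch of such a startup wrapper, not from the original comment; 192.0.2.53 is a placeholder for the real nameserver, and the script is assumed to be set as the image's ENTRYPOINT.)
#!/bin/sh
# Overwrite the resolv.conf that Docker generated, then hand off to the real command.
echo "nameserver 192.0.2.53" > /etc/resolv.conf
exec "$@"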
We use our own patch to do that:
diff --git a/components/engine/container/container.go b/components/engine/container/container.go
index 11814b7..1206e7b 100644
--- a/components/engine/container/container.go
+++ b/components/engine/container/container.go
@@ -793,9 +793,12 @@ func (container *Container) BuildCreateEndpointOptions(n libnetwork.Network, epC
createOptions = append(createOptions, libnetwork.CreateOptionService(svcCfg.Name, svcCfg.ID, net.ParseIP(vip), portConfigs, svcCfg.Aliases[n.ID()]))
}
- if !containertypes.NetworkMode(n.Name()).IsUserDefined() {
- createOptions = append(createOptions, libnetwork.CreateOptionDisableResolution())
- }
+ // if !containertypes.NetworkMode(n.Name()).IsUserDefined() {
+ // createOptions = append(createOptions, libnetwork.CreateOptionDisableResolution())
+ // }
+
+ // Always disable embedded dns server
+ createOptions = append(createOptions, libnetwork.CreateOptionDisableResolution())
I second this. My beef is that I want full control of the nat table for the container. The assumptions I have to make to allow Docker to set the necessary nat rules seem unnecessary.
My use case for this is using the macvlan network driver together with nftables.
Even though I've set "iptables": false, eDNS still tries (unsuccessfully) to set up NAT rules and then proceeds to mangle /etc/resolv.conf, breaking all container DNS resolution in the process and forcing me to run a patched docker with eDNS disabled.
While a switch to disable eDNS would be best, I'd at least like to see eDNS disabled when NATting is impossible. Disabling eDNS when "iptables": false is set would be a good start (until nftables is fully supported).
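(For reference, and not part of the original comment, the setting being referred to lives in the daemon configuration, typically /etc/docker/daemon.json:)
{
  "iptables": false
}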
I had the same issue with nftables. Traefik would fail with
dial tcp: lookup acme-v02.api.letsencrypt.org on 127.0.0.11:53: read udp 127.0.0.1:45118->127.0.0.11:53: read: connection refused
Obviously there's no such thing as 127.0.0.11 on my system: I use static IPs with static src/dst nat rules. The solution, which also works with static IPs, is very simple and doesn't require recompiling or changing Dockerfiles. Put this in docker-compose:
volumes:
- /opt/data/resolv.conf:/etc/resolv.conf:ro
where /opt/data/resolv.conf contains the correct DNS servers (8.8.8.8) :-) Now everything runs as it should.
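(As an illustration, and not from the original comment, the mounted file can be as small as:)
nameserver 8.8.8.8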
Same problem here (openvswitch network driver, nftables with iptables disabled in the modprobe config, our own resolver/discovery service). The patch above helps, but I still don't understand why this isn't made optional in the same way that DisableGatewayService works.
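(A rough sketch, not an existing Docker option, of what making it configurable might look like; DisableEmbeddedDNS is a hypothetical per-container setting, while CreateOptionDisableResolution is the same libnetwork option used in the patch above.)
// Hypothetical variant of the check in BuildCreateEndpointOptions: keep the default
// behaviour, but let a per-container flag force the embedded DNS server off as well.
// container.HostConfig.DisableEmbeddedDNS does not exist today; it only illustrates
// the shape such a switch could take.
if !containertypes.NetworkMode(n.Name()).IsUserDefined() || container.HostConfig.DisableEmbeddedDNS {
	createOptions = append(createOptions, libnetwork.CreateOptionDisableResolution())
}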
Add me to the list of people who really need to be able to disable the embedded DNS server. In my case, it's because it doesn't handle PTR queries correctly.
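(Not from the original comment: one way to check PTR handling against the embedded server from inside an affected container, assuming dig is installed, is a reverse lookup such as the following, reusing the macvlan address from earlier in this thread.)
dig -x 10.10.10.100 @127.0.0.11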
In my case, ssh to a remote docker container is very slow, and we still need sshd inside the container because a legacy application depends heavily on the ssh command. A workaround is to use only /etc/hosts and disable DNS name resolution in /etc/nsswitch.conf by changing hosts: files dns to hosts: files. The resolver will then ignore /etc/resolv.conf:
sed -i 's/^hosts:.*/hosts: files/' /etc/nsswitch.conf
Found this thread, and for me this is still an issue.
This is also affecting me...