Support different DNS resolver if host network is used
Hi,
I have a host system with systemd-resolved configured. With https://github.com/moby/moby/pull/41022, a more thorough decision concept was implemented for whether /etc/resolv.conf or /run/systemd/resolve/resolv.conf should be selected. In short, when I activate the host networking mode during a build, I would expect /etc/resolv.conf to be copied from my host into the container. The selection works fine for me for all docker commands, like run and build. However, I've noticed that when I activate BuildKit, the behavior is different: it seems like /run/systemd/resolve/resolv.conf is always used, regardless of whether the host network option is selected via --network host.
Example
Dockerfile
FROM debian:stable-slim as test
RUN cat /etc/resolv.conf
Without Buildkit
docker build --no-cache .
Sending build context to Docker daemon 2.048kB
Step 1/2 : FROM debian:stable-slim as test
---> a59bf83b71db
Step 2/2 : RUN cat /etc/resolv.conf
---> Running in e28a441f215f
# This is /run/systemd/resolve/resolv.conf managed by man:systemd-resolved(8).
# Do not edit.
#
# This file might be symlinked as /etc/resolv.conf. If you're looking at
# /etc/resolv.conf and seeing this text, you have followed the symlink.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 192.168.0.1
search fritz.box mycompany.com
Removing intermediate container e28a441f215f
---> d8068b2e8e0a
docker build --no-cache --network host .
Sending build context to Docker daemon 2.048kB
Step 1/2 : FROM debian:stable-slim as test
---> a59bf83b71db
Step 2/2 : RUN cat /etc/resolv.conf
---> Running in bf6e9a9a717b
# This is /run/systemd/resolve/stub-resolv.conf managed by man:systemd-resolved(8).
# Do not edit.
#
# This file might be symlinked as /etc/resolv.conf. If you're looking at
# /etc/resolv.conf and seeing this text, you have followed the symlink.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0 trust-ad
search fritz.box mycompany.com
Removing intermediate container bf6e9a9a717b
---> 8918cfff6943
With Buildkit
DOCKER_BUILDKIT=1 docker build --no-cache --progress plain .
I expect to have /run/systemd/resolve/resolv.conf copied, as I use the default networking mode (as above).
#5 [2/2] RUN cat /etc/resolv.conf
#5 sha256:545184e0f05d6f855b6b5907cc3e8725388186066a75e48d6a936aa232d3bfd8
#5 0.347 # This is /run/systemd/resolve/resolv.conf managed by man:systemd-resolved(8).
#5 0.347 # Do not edit.
#5 0.347 #
#5 0.347 # This file might be symlinked as /etc/resolv.conf. If you're looking at
#5 0.347 # /etc/resolv.conf and seeing this text, you have followed the symlink.
#5 0.347 #
#5 0.347 # This is a dynamic resolv.conf file for connecting local clients directly to
#5 0.347 # all known uplink DNS servers. This file lists all configured search domains.
#5 0.347 #
#5 0.347 # Third party programs should typically not access this file directly, but only
#5 0.347 # through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
#5 0.347 # different way, replace this symlink by a static file or a different symlink.
#5 0.347 #
#5 0.347 # See man:systemd-resolved.service(8) for details about the supported modes of
#5 0.347 # operation for /etc/resolv.conf.
#5 0.347
#5 0.347 nameserver 192.168.0.1
#5 0.347 nameserver fd00::3ea6:2fff:febc:9583
#5 0.347 search fritz.box mycompany.com
#5 DONE 0.4s
#6 exporting to image
#6 sha256:e8c613e07b0b7ff33893b694f7759a10d42e180f2b4dc349fb57dc6b71dcab00
#6 exporting layers 0.0s done
DOCKER_BUILDKIT=1 docker build --no-cache --progress plain --network host .
I expect to have /etc/resolv.conf from my host copied, as I use the host networking mode.
But instead, the same file as above was copied to the image.
#5 [2/2] RUN cat /etc/resolv.conf
#5 sha256:5c4fe41150f29d92ec2ef20be016044fcab54f17a063d0efcbade71252e99f3a
#5 0.254 # This is /run/systemd/resolve/resolv.conf managed by man:systemd-resolved(8).
#5 0.254 # Do not edit.
#5 0.254 #
#5 0.254 # This file might be symlinked as /etc/resolv.conf. If you're looking at
#5 0.254 # /etc/resolv.conf and seeing this text, you have followed the symlink.
#5 0.254 #
#5 0.254 # This is a dynamic resolv.conf file for connecting local clients directly to
#5 0.254 # all known uplink DNS servers. This file lists all configured search domains.
#5 0.254 #
#5 0.254 # Third party programs should typically not access this file directly, but only
#5 0.254 # through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
#5 0.254 # different way, replace this symlink by a static file or a different symlink.
#5 0.254 #
#5 0.254 # See man:systemd-resolved.service(8) for details about the supported modes of
#5 0.254 # operation for /etc/resolv.conf.
#5 0.254
#5 0.254 nameserver 192.168.0.1
#5 0.254 nameserver fd00::3ea6:2fff:febc:9583
#5 0.254 search fritz.box mycompany.com
#5 DONE 0.3s
#6 exporting to image
#6 sha256:e8c613e07b0b7ff33893b694f7759a10d42e180f2b4dc349fb57dc6b71dcab00
#6 exporting layers 0.0s done
I would expect the behavior to be the same after enabling BuildKit.
I am also having issues with --network=host and buildkit. Here is a description of my setup:
My setup
Linux machine, connected to company VPN from home.
My /etc/resolv.conf:
domain vpn.[REDACTED_COMPANY_DOMAIN]
nameserver [REDACTED_COMPANY_NS]
nameserver [REDACTED_COMPANY_NS]
nameserver 127.0.0.53
search vpn.[REDACTED_COMPANY_DOMAIN] lan
My /run/systemd/resolve/resolv.conf:
# This is /run/systemd/resolve/resolv.conf managed by man:systemd-resolved(8).
# Do not edit.
#
# This file might be symlinked as /etc/resolv.conf. If you're looking at
# /etc/resolv.conf and seeing this text, you have followed the symlink.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver [REDACTED_HOME_DNS_SERVER]
search lan
Using the simple test Dockerfile:
FROM debian:bullseye-slim
RUN cat /etc/resolv.conf
Without Buildkit
$ DOCKER_BUILDKIT=0 docker build --no-cache --network=host .
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
environment-variable.
Sending build context to Docker daemon 2.048kB
Step 1/2 : FROM debian:bullseye-slim
---> a36a86fb63b1
Step 2/2 : RUN cat /etc/resolv.conf
---> Running in 39867e129cb0
domain vpn.[REDACTED_COMPANY_DOMAIN]
nameserver [REDACTED_COMPANY_NS]
nameserver [REDACTED_COMPANY_NS]
nameserver 127.0.0.53
search vpn.[REDACTED_COMPANY_DOMAIN] lan
Removing intermediate container 39867e129cb0
---> 10c0815d1c13
Successfully built 10c0815d1c13
This is as expected (it's my actual /etc/resolv.conf), and it can find the hostnames in my company through the VPN.
With Buildkit
$ DOCKER_BUILDKIT=1 docker build --progress=plain --no-cache --network=host .
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 90B done
#2 DONE 0.1s
#3 [internal] load metadata for docker.io/library/debian:bullseye-slim
#3 DONE 0.0s
#4 [1/2] FROM docker.io/library/debian:bullseye-slim
#4 CACHED
#5 [2/2] RUN cat /etc/resolv.conf
#5 0.134 # This is /run/systemd/resolve/resolv.conf managed by man:systemd-resolved(8).
#5 0.134 # Do not edit.
#5 0.134 #
#5 0.134 # This file might be symlinked as /etc/resolv.conf. If you're looking at
#5 0.134 # /etc/resolv.conf and seeing this text, you have followed the symlink.
#5 0.134 #
#5 0.134 # This is a dynamic resolv.conf file for connecting local clients directly to
#5 0.134 # all known uplink DNS servers. This file lists all configured search domains.
#5 0.134 #
#5 0.134 # Third party programs should typically not access this file directly, but only
#5 0.134 # through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
#5 0.134 # different way, replace this symlink by a static file or a different symlink.
#5 0.134 #
#5 0.134 # See man:systemd-resolved.service(8) for details about the supported modes of
#5 0.134 # operation for /etc/resolv.conf.
#5 0.134
#5 0.134 nameserver [REDACTED_HOME_DNS_SERVER]
#5 0.134 search lan
#5 DONE 0.2s
#6 exporting to image
#6 exporting layers 0.1s done
#6 writing image sha256:dc301be8f3594e6d6cb683f8a58099c868c1a4901d04bc3ed7b5c5306bcbeae8 done
#6 DONE 0.1s
And I can't access any of my company hostnames, as DNS resolution fails.
I'm encountering the same regression with BuildKit being enabled/installed by default in recent Docker versions. Our image builders do not have direct network access. Instead, they have a properly configured Unbound installed locally:
On the host:
$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 127.0.0.1
With the legacy builder, this worked fine with docker build --build-arg http_proxy=http://redacted-http-proxy-hostname --network=host .
+ cat /etc/resolv.conf
nameserver 127.0.0.1
+ apt-get update
Get:1 http://deb.debian.org/debian bullseye InRelease [116 kB]
Get:2 http://deb.debian.org/debian-security bullseye-security InRelease [48.4 kB]
Get:3 http://deb.debian.org/debian bullseye-updates InRelease [44.1 kB]
Get:4 http://deb.debian.org/debian bullseye/main amd64 Packages [8183 kB]
Get:5 http://deb.debian.org/debian-security bullseye-security/main amd64 Packages [243 kB]
Get:6 http://deb.debian.org/debian bullseye-updates/main amd64 Packages [14.6 kB]
Fetched 8649 kB in 2s (4579 kB/s)
Reading package lists...
With BuildKit, it pulls Google Public DNS from somewhere, which is not reachable and fails to resolve the hostname of our internal HTTP proxy:
#0 0.103 + cat /etc/resolv.conf
#0 0.105
#0 0.105 nameserver 8.8.8.8
#0 0.105 nameserver 8.8.4.4
#0 0.105 nameserver 2001:4860:4860::8888
#0 0.105 nameserver 2001:4860:4860::8844
#0 0.105 + apt-get update
#0 0.140 Err:1 http://deb.debian.org/debian bullseye InRelease
#0 0.142 Could not resolve '*redacted-http-proxy-hostname*'
#0 0.143 Err:2 http://deb.debian.org/debian-security bullseye-security InRelease
#0 0.145 Could not resolve '*redacted-http-proxy-hostname*'
#0 0.146 Err:3 http://deb.debian.org/debian bullseye-updates InRelease
#0 0.147 Could not resolve '*redacted-http-proxy-hostname*'
#0 0.151 Reading package lists...
#0 0.164 W: Failed to fetch http://deb.debian.org/debian/dists/bullseye/InRelease Could not resolve '*redacted-http-proxy-hostname*'
#0 0.165 W: Failed to fetch http://deb.debian.org/debian-security/dists/bullseye-security/InRelease Could not resolve '*redacted-http-proxy-hostname*'
#0 0.166 W: Failed to fetch http://deb.debian.org/debian/dists/bullseye-updates/InRelease Could not resolve '*redacted-http-proxy-hostname*'
#0 0.167 W: Some index files failed to download. They have been ignored, or old ones used instead.
I've attempted to configure DNS using /etc/buildkitd.toml, but apparently that file is ignored:
[dns]
nameservers=["1.1.1.1"] # 1.1.1.1 as a test to check if it filters 127.0.0.1 for some reason
I've also attempted to configure DNS in Docker's daemon.json, but that is ignored as well.
I've also attempted to configure DNS in Docker's daemon.json, but that is ignored as well.
Okay, that works, but it doesn't work if I use 127.0.0.1 in there. I might be able to work around that by using a different IP address pointing to the local machine that is not filtered. It's still a regression and highly unexpected, though.
I think this issue should be marked as a bug and https://github.com/moby/libnetwork/pull/2385 should probably be reverted.
Systemd is already symlinking the proper resolver config to /etc/resolv.conf and placing itself as a stub resolver with all its bells and whistles (e.g. dynamic reconfiguration when connecting to a VPN).
Furthermore, /run/systemd/resolve/resolv.conf is simply the wrong file: it basically bypasses resolved. The proper file is in fact /run/systemd/resolve/stub-resolv.conf, but again, this is already symlinked to /etc/resolv.conf by default.
For reference, here's the man page
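To illustrate the point, here is a quick check on the host (assuming a default systemd-resolved setup) that shows both the symlink target and the uplink servers resolved forwards to:
# where does /etc/resolv.conf actually point?
readlink -f /etc/resolv.conf
# expected on a default systemd-resolved install: /run/systemd/resolve/stub-resolv.conf
# and the uplink DNS servers currently in use:
resolvectl status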
Just as a side note: using the systemd default also solves the problem that BuildKit is actually not updating on resolver config changes. I realized that when I tried to redirect /run/systemd/resolve/resolv.conf to /run/systemd/resolve/stub-resolv.conf manually during investigation; BuildKit was still using the old config.
I hope this is some useful information ...
thx
Now that Buildx has become the default with Docker and the behavior is a breaking change, I would also promote this issue to a bug.
I am also experiencing this issue on Arch Linux, where my system can correctly resolve a hostname with a local DNS server, but because docker buildx build is effectively bypassing the stub resolver, it ignores the proper DNS resolution that I have set up on my system and tries to resolve the local hostname using 1.1.1.1.
This functionality was working until quite recently, and I don't understand why this change would be made. If it's not a bug, it is definitely a regression.
This functionality was working until quite recently, and I don't understand why this change would be made. If it's not a bug, it is definitely a regression.
Was "quite recently" since 2023? (v23 of Docker would make sense)
If it was with a recent version of BuildKit, you could try using an earlier release.
You probably want to use --network host (which I think requires --allow?). That should avoid using an internal DNS resolver? Not sure if this is different when using the docker-container driver to do the build; you might need to configure the network option for that instead?
You can also try setting the DNS to use in /etc/docker/daemon.json?
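For reference, a daemon-wide DNS override is a small daemon.json change. This is only a sketch (192.168.0.1 stands in for whatever local resolver the builds should use), the daemon needs a restart to pick it up, and later comments suggest it does not get propagated to docker-container builders:
# /etc/docker/daemon.json
{
  "dns": ["192.168.0.1"]
}
# then restart the daemon
sudo systemctl restart docker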
Was "quite recently" since 2023? (v23 of Docker would make sense)
"Quite recently" as in a couple of weeks ago. We recently transitioned our local registry to HTTPS and we removed all of the insecure registry stuff from our buildx instance and the resolution of domain names using our local DNS server worked fine.
I'm not familiar with the extent of what insecure entitlements and network.host mean, but it seems excessive to give the building of a docker image full access to the host network environment just to allow a local DNS server to be accessed. Is there really no other option?
You can also try https://github.com/moby/moby/issues/5779#issuecomment-478518282?
When I sorted out the insecure registries to allow buildx to access our old HTTP registry, I was told that buildx doesn't really pay much attention to /etc/docker/daemon.json. Has this changed? Or is this one of the settings that buildx actually looks at?
"Quite recently" as in a couple of weeks ago.
There's been no new releases of Buildx since July: https://github.com/docker/buildx/releases/tag/v0.11.2
The v0.11.2 release notes refer to a BuildKit dependency bump from a June commit to a July commit (why it's not pinning BuildKit release versions anymore I don't know), which seems to roughly correspond to v0.12.0 of BuildKit (July release).
So to clarify, by "quite recently" you're saying it was working fine in August? There are two point releases with small changes for BuildKit, with v0.12.2 being within that few-weeks window, but Buildx shouldn't be using that by default? What's your docker buildx ls / docker buildx inspect?
Are you sure it's an issue with BuildKit, or something that changed with Buildx? What version was working before? Buildx can create builders with older releases for the docker-container driver IIRC, so if it's really due to a recent change you should be able to confirm by rolling the version back through that.
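If anyone wants to try that, here's a rough sketch of pinning the BuildKit version through the docker-container driver (the image tag is just an example; pick whichever release you want to compare against):
# create a builder backed by a specific BuildKit release
docker buildx create --name bk-pinned --driver docker-container \
  --driver-opt image=moby/buildkit:v0.11.6
# run the same build against it
docker buildx --builder bk-pinned build --no-cache --progress plain .
# clean up afterwards
docker buildx rm bk-pinned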
Likewise, with moby / Docker releases, it doesn't look like anything has been released recently (except for a point release yesterday).
Maybe another change elsewhere is contributing to the problem? 🤷♂️
it seems excessive to allow the building of a docker image full access to the host network environment just to allow a local DNS server to be accessed. Is there really no other option?
Have you tried /etc/docker/daemon.json to configure the DNS?
I was told that buildx doesn't really pay much attention to /etc/docker/daemon.json, has this changed? Or is this one of the settings that buildx actually looks at?
No clue, I'm just a user like yourself.
Pretty simple to verify by giving it a try though? The only potential inconvenience is that the DNS change isn't scoped to your buildx builds (or specific builds), so having an option to provide docker buildx build a DNS override only when relevant would be better.
My system package for buildx updated from 0.11.0 to 0.11.2 on 2023-08-10. I can't remember the specific date that I last tested the HTTPS registry, but it's possible it was around then.
I'm not specifically saying that it is a buildx/docker issue, but I am seeing the exact same symptoms that are described earlier in this issue. The /etc/resolv.conf inside the container is the same as /run/systemd/resolve/resolv.conf on the host, which, as is mentioned earlier, is the wrong file to be using as it completely bypasses systemd-resolved.
Having a look at /run/systemd/resolve/resolv.conf now, I am seeing that the ordering of the nameservers is different to what it was yesterday when I made my comment, so it seems like systemd-networkd/systemd-resolved populates that file differently each time, which would make this file an unreliable source of nameservers.
TL;DR: I don't see anything in release notes that suggests the breakage you experienced was introduced recently.
I'm not specifically saying that it is a buildx/docker issue, but I am seeing the exact same symptoms that are described earlier in this issue
That's ok. It'd be good to confirm whether rolling back to buildx 0.11.0 does reproduce how it was before, or if you're actually affected by something much older regardless, due to introducing some other change (like the registry transition).
Updates and related dependency changes
My system package for buildx updated from 0.11.0 to 0.11.2 on 2023-08-10.
Thanks!
So buildx 0.11.0 (June 2023) was using an 0.11.0 RC version of buildkit (from Dec 2022), and buildx 0.11.2 (July 2023) is using buildkit 0.12.1 (noted as a dev release though), with proper 0.12.1 release in Aug 2023.
BuildKit has had quite a few changes since then, but nothing stood out as related in the release notes, while buildx 0.11.1 mentions fixing a networking bug with --add-host for the special host-gateway address.
We can also observe that between the buildx versions Docker / moby was bumped from 24.0.2 to 24.0.5, and there were some networking changes there too, but nothing that stands out.
The release notes might not be highlighting these kinds of changes; for example, there's plenty of DNS activity on moby/moby.
Identifying the cause
You'd have to look deeper to see if it really was a change from these dependencies that changed some behaviour. Obviously it'd help to verify that rolling back your version of buildx does resolve the issue; if it still occurs, then something else has actually introduced it (making the much older history discussed in this issue relevant).
I'm not saying that this isn't a bug, as an earlier comment above draws attention to why it is, but they reference reverting a PR from 2019. That bug wouldn't be newly introduced by buildx in the past few weeks.
Update:
- The 2019 PR referenced has a comment stating to check if /etc/resolv.conf is a symlink to /run/systemd/resolve/stub-resolv.conf or contains 127.0.0.53 as the DNS server, to which the author responds with a comment justifying the approach agreed on (acknowledging it was contrary to the systemd-resolved docs advice):
- Someone later chimed in with the same concern about using /run/systemd/resolve/resolv.conf, stating queries should be sent to 127.0.0.53, where a maintainer responds noting why that's not always possible (the default docker bridge can't, for legacy reasons; a change was proposed to address that): https://github.com/moby/libnetwork/pull/2385#issuecomment-505446007
- This led to an improvement in a 2020 PR to integrate with systemd-resolved better when the networking conditions required to do so are met: https://github.com/moby/moby/pull/41022
Getting it resolved upstream
I've been pushing to get a change through with the systemd .service config for another bug (which wasn't originally an issue until systemd made it one with a change from v240 in 2018Q4, and Go temporarily made it worse indirectly in Aug 2022, but that has since been resolved upstream).
It took quite a while to get that change approved and merged into moby, and I'm still doing the same for containerd, and that was with substantial evidence to justify that the change / fix was warranted. The issue had been known for a while prior and had been causing problems for years.
The maintainers get heaps of notifications due to all the activity they're subscribed to. If anyone is interested in seeing a fix, my advice is to:
- Open an issue about it in the correct repo (moby/moby for this systemd-resolved network handling, I think?)
- Detail the problem, provide a minimal reproduction example and a suggested fix (it doesn't have to be code, but it should clearly communicate why it's a fix)
- Optionally push ahead with a PR (or wait for a response to greenlight that), and reach out to the maintainers via Slack (this seems the best way to get their attention and a discussion going, in my experience).
Update: Given my follow-up comment, it seems that at least for @Bidski, who I assume is using the docker-container driver and not the host network, the issue should be raised upstream with Buildx. There's been activity/discussion around passing the DNS config over to BuildKit, and a stale PR that may be relevant to better integration (network-host focused, for docker-container).
This is a bit messy and repetitive, sorry about that. I've split it off as a separate part of my prior comment. Spent too much time connecting it all together, only to get many of the same answers relayed to me just as I was finishing up my response 😆
Follow-up Update
I didn't read through prior comments which are a bit more informative 😅
The original author of this issue (from Oct 2021) was specifically pointing out that using BuildKit with --network host during an image build doesn't behave like the referenced moby/moby 2020 PR change, which improved upon the moby/libnetwork 2019 PR (that repo is no longer used with newer releases of Docker). Equivalent BuildKit PR (June 2019).
The 2020 PR seems to have addressed the concern, and notes the following behaviour (a rough verification sketch follows the list):
- Default bridge network (while default for compatibility, now considered legacy): no embedded DNS is available, systemd-resolved is apparently bypassed, and the upstream DNS servers are configured for the container.
- Custom network attached to a container with embedded DNS: /etc/resolv.conf is copied from host to container, and the embedded DNS forwards external DNS lookups to systemd-resolved.
- Host mode network: uses /etc/resolv.conf, no embedded DNS involved, and the container runs in the host's networking namespace instead of its own, so systemd-resolved gets used. Apparently it's not kept in sync with the host after the container starts, though.
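A rough way to compare what each mode hands the container on a systemd-resolved host (this assumes the post-moby/moby#41022 daemon behaviour and just uses throwaway containers):
# host namespace: the host's own /etc/resolv.conf (the 127.0.0.53 stub)
docker run --rm --network host debian:stable-slim cat /etc/resolv.conf
# user-defined network: embedded DNS (127.0.0.11), which forwards to the host resolver
docker network create dns-test
docker run --rm --network dns-test debian:stable-slim cat /etc/resolv.conf
# default bridge: upstream servers taken from /run/systemd/resolve/resolv.conf
docker run --rm debian:stable-slim cat /etc/resolv.conf
docker network rm dns-test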
v23 of Docker enabled BuildKit by default IIRC.
- The 2020 moby/moby PR still has that logic here: https://github.com/moby/moby/blob/075a2d89b96ca2c31a61ce3b05214bbe2ba49af8/daemon/container_operations_unix.go#L382-L428
- It calls some libnetwork methods; originally in the 2020 PR that was the docker/libnetwork repo, but it is now docker/docker/libnetwork (aka moby/moby/libnetwork).
  - Before: https://github.com/moby/libnetwork/blob/67e0588f1ddfaf2faf4c8cae8b7ea2876434d91c/sandbox.go#L1092-L1098
  - Now: https://github.com/moby/moby/blob/075a2d89b96ca2c31a61ce3b05214bbe2ba49af8/libnetwork/sandbox_options.go#L65-L71
- Buildx using BuildKit for similar systemd-resolved support in June 2019 (BuildKit gets systemd-resolved support): https://github.com/moby/buildkit/pull/1033
  - BuildKit follow-up PR (June 2019) that lets the Docker daemon DNS config be used for resolvconf (related: an Aug 2022 request for docs on how to use this with buildkitd.toml, which I've raised a PR to address): https://github.com/moby/buildkit/pull/1040
  - BuildKit's resolvconf.go brings in docker/docker/libnetwork/resolvconf and aliases Path() (originally introduced after the June 2019 PR, in Oct 2019, to support testing; changed from Get() to Path() in Oct 2022).
  - This Oct 2022 PR notes that BuildKit only parses resolv.conf if it was not configured with a DNSConfig (which buildkitd.toml can handle, or Buildx if it doesn't already?). A review note by the maintainer also observes that the Path() usage (when DNSConfig is not provided as an override) seems incorrect in BuildKit (if custom networks are used), as it should be made to also support systemd-resolved properly here too (the improvements that the moby/moby 2020 PR introduced).
Unrelated to running with host mode networking (possible insight):
This later comment notes that they were able to change DNS via /etc/docker/daemon.json successfully, but it had to avoid running DNS via 127.0.0.1 (possibly due to iptables rules with bridge networking, which IIRC uses 127.0.0.1/8 for userland-proxy: true, the default). 127.0.0.1/8 would affect systemd-resolved, right?
If it's due to userland-proxy, you could try disabling that (I think it exists to support routing localhost to the docker-proxy process for port-mapping TCP/UDP, thus possibly getting in the way of your own DNS resolver?).
userland-proxy: false will use a Linux kernel feature without applying the iptables rule needed to support localhost routing to containers (NAT to the container's port on a private-range IP, IIRC), but only for IPv4 (that feature isn't available for IPv6 [::1], which would work with userland-proxy: true).
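If anyone wants to test that theory, the switch lives in the daemon config; this is a sketch only, and it affects published-port handling for all containers, so treat it as an experiment rather than a fix:
# /etc/docker/daemon.json
{
  "userland-proxy": false
}
# then restart the daemon
sudo systemctl restart docker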
Update: For dockerd, if you set DNS via the daemon.json and it's 127.0.0.53, Docker may try "to be helpful" as noted in this 2020 bug report.
- As comments there share, that causes issues with how the DNS servers to query are sourced when you're relying on features like split DNS, though.
- A workaround was shared to configure systemd-resolved to listen on a different IP and configure Docker for that instead, which bypasses that specialized detection (a rough sketch follows): https://github.com/docker/for-linux/issues/979#issuecomment-897678768
Possibly relevant group of issues and a stale PR (--network=host for BuildKit for local DNS):
- https://github.com/docker/buildx/issues/1347
- https://github.com/moby/buildkit/issues/3210
- https://github.com/moby/buildkit/pull/3244
Related Oct 2022 issue with the registry by @Bidski: https://github.com/docker/buildx/issues/1370#issuecomment-1288516840
- Comment from that issue stating the docker-container driver supports the network parameter but DNS settings aren't respected (references the "relevant group" BuildKit issue above): https://github.com/docker/buildx/issues/1370#issuecomment-1290254074
- Comment that /etc/docker/daemon.json is not properly propagated through to docker-container builders, only buildkitd.toml?: https://github.com/docker/buildx/issues/1370#issuecomment-1308479499
So BuildKit does not implement the same systemd-resolved 2020 improvement that moby / Docker has, only the original 2019 integration. Buildx could supposedly work around that, which seems to be tracked here, or the user presently can via the buildkitd.toml file by configuring the DNS section (a Buildx maintainer in July 2019 also states Buildx needs to configure BuildKit for external DNS).
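For the docker-container driver that roughly means passing a buildkitd.toml with a [dns] section when creating the builder. A sketch only: the nameserver and search domain below are placeholders, and newer Buildx releases may name the flag --buildkitd-config rather than --config.
# buildkitd.toml
[dns]
  nameservers = ["192.168.0.1"]
  searchDomains = ["mycompany.com"]
# create a builder that uses it and run a build against it
docker buildx create --name dns-builder --driver docker-container --config ./buildkitd.toml
docker buildx --builder dns-builder build --no-cache --progress plain .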
- Presently BuildKit has its resolvconf.go calling Path() from moby/moby/libnetwork/resolvconf/resolvconf.go with the 2019 systemd-resolved support (the alternate path is used if /etc/resolv.conf contains a single 127.0.0.53 name server).
- You would like the user-defined network handling that forwards DNS to systemd-resolved.
  - Host-mode networking should work fine, but the docker-container driver for Buildx must configure BuildKit correctly for that (either way, the behaviour for the Docker daemon was to copy /etc/resolv.conf into the container, so it seems it won't pick up later changes to the file).
  - The default Docker bridge network, for legacy reasons, isn't as effective with systemd-resolved and copies from /run/systemd/resolve/resolv.conf for /etc/resolv.conf, since it lacks the capability to do better until breaking away from legacy support.
  - I don't know where this logic would live in BuildKit, or if they'd accept it. The DNSConfig override approach may be expected instead from Buildx, which would have better context on what the appropriate config is.
https://github.com/moby/buildkit/issues/2404#issue-1019927031 sums it up nicely, thanks @Ka0o0. Also worth pointing out is that docker build and docker run behave inconsistently right now (because BuildKit is enabled by default in newer Docker releases): docker run --network=host does use your nameserver 127.0.0.53 (systemd-resolved) DNS if you have it enabled, whereas docker build does not. This provides a slightly confusing end-user experience. We should aim at getting this fixed, for the sake of providing a better UX.
Has this been resolved in #4524?
UPDATE: The reason I'm asking is that I still see this issue, and it has become very unpleasant.