Docker v4.31.0 - host.docker.internal resolves to an IPv6 address in an unreachable Network
Description
I'm running the following Docker Compose setup:
https://github.com/jo-tools/docker/blob/main/local-cubesql-volumes/docker-compose.yml
This runs a database server (cubeSQL) and a web administration tool. The setup is preconfigured so that one can simply click 'Connect' in the Web Admin to connect successfully:
- Hostname: host.docker.internal
- Port: 4430
No issues with Docker v4.30.0 and earlier
However, with Docker v4.31.0 the connection can no longer be established.
Interestingly, changing the hostname from host.docker.internal to the host's effective IP (e.g. 192.168.1.x) makes the connection work.
So something has changed in v4.31.0 that causes issues with network connections within/between containers.
Reproduce
- Use this docker-compose.yml: https://github.com/jo-tools/docker/blob/main/local-cubesql-volumes/docker-compose.yml
- Run docker-compose up -d
- Open the Web Admin Tool in the browser: http://localhost:4431
- Push the 'Connect' button
Expected behavior
A connection to the database server via host.docker.internal on port 4430 can be established.
Actual behavior
- works with Docker v4.30.0
- connection errors with Docker v4.31.0 (via host.docker.internal:4430)
- connection errors with Docker v4.31.0 (via cubesql:4430), which should also work, since that's the service hostname on the network both containers share
- works with Docker v4.31.0 only when connecting via the host's real/effective IP (e.g. 192.168.1.x)
docker version
Client:
Version: 26.1.4
API version: 1.45
Go version: go1.21.11
Git commit: 5650f9b
Built: Wed Jun 5 11:26:02 2024
OS/Arch: darwin/amd64
Context: desktop-linux
Server: Docker Desktop 4.31.0 (153195)
Engine:
Version: 26.1.4
API version: 1.45 (minimum version 1.24)
Go version: go1.21.11
Git commit: de5c9cf
Built: Wed Jun 5 11:29:22 2024
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.33
GitCommit: d2d58213f83a351ca8f528a95fbd145f5654e957
runc:
Version: 1.1.12
GitCommit: v1.1.12-0-g51d5e94
docker-init:
Version: 0.19.0
GitCommit: de40ad0
docker info
Client:
Version: 26.1.4
Context: desktop-linux
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.14.1-desktop.1
Path: /Users/juerg/.docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.27.1-desktop.1
Path: /Users/juerg/.docker/cli-plugins/docker-compose
debug: Get a shell into any image or container (Docker Inc.)
Version: 0.0.32
Path: /Users/juerg/.docker/cli-plugins/docker-debug
dev: Docker Dev Environments (Docker Inc.)
Version: v0.1.2
Path: /Users/juerg/.docker/cli-plugins/docker-dev
extension: Manages Docker extensions (Docker Inc.)
Version: v0.2.24
Path: /Users/juerg/.docker/cli-plugins/docker-extension
feedback: Provide feedback, right in your terminal! (Docker Inc.)
Version: v1.0.5
Path: /Users/juerg/.docker/cli-plugins/docker-feedback
init: Creates Docker-related starter files for your project (Docker Inc.)
Version: v1.2.0
Path: /Users/juerg/.docker/cli-plugins/docker-init
sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc.)
Version: 0.6.0
Path: /Users/juerg/.docker/cli-plugins/docker-sbom
scout: Docker Scout (Docker Inc.)
Version: v1.9.3
Path: /Users/juerg/.docker/cli-plugins/docker-scout
Server:
Containers: 8
Running: 2
Paused: 0
Stopped: 6
Images: 8
Server Version: 26.1.4
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d2d58213f83a351ca8f528a95fbd145f5654e957
runc version: v1.1.12-0-g51d5e94
init version: de40ad0
Security Options:
seccomp
Profile: unconfined
cgroupns
Kernel Version: 6.6.31-linuxkit
Operating System: Docker Desktop
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.755GiB
Name: docker-desktop
ID: dd331a20-17bd-4cf5-a2da-7ae5f90b26f6
Docker Root Dir: /var/lib/docker
Debug Mode: false
HTTP Proxy: http.docker.internal:3128
HTTPS Proxy: http.docker.internal:3128
No Proxy: hubproxy.docker.internal
Labels:
com.docker.desktop.address=unix:///Users/juerg/Library/Containers/com.docker.docker/Data/docker-cli.sock
Experimental: false
Insecure Registries:
hubproxy.docker.internal:5555
127.0.0.0/8
Live Restore Enabled: false
Diagnostics ID
D8B99358-A952-49AC-A00D-0CC40DA51EB0/20240616195445
Additional Info
- macOS Sonoma 14.5
- MacBook Pro 16-inch, 2019
- Processor: 2.6 GHz 6-Core Intel Core i7
This might be related to Issue https://github.com/docker/for-mac/issues/7324
We have a similar issue. host.docker.internal now resolves to an IPv6 address.
Confirm with, for example: docker run --rm alpine:latest getent hosts host.docker.internal
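To see both address families at once (getent hosts prints only one), you can also query the ahosts database; a quick sketch, assuming the image's getent supports it:
docker run --rm alpine:latest getent ahosts host.docker.internal
# on an affected 4.31.0 install this lists the fdc4:... IPv6 address alongside the
# 192.168.65.254 IPv4 gateway; on 4.30 and earlier only the IPv4 address shows up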
Interesting...
ping uses the IPv4 address, and it can't ping the IPv6 address:
ping: connect: Network is unreachable
# getent hosts host.docker.internal
fdc4:f303:9324::254 host.docker.internal
# nslookup -query=AAAA host.docker.internal
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: host.docker.internal
Address: fdc4:f303:9324::254
# nslookup -query=A host.docker.internal
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: host.docker.internal
Address: 192.168.65.254
# ping -c 2 host.docker.internal
PING host.docker.internal (192.168.65.254) 56(84) bytes of data.
64 bytes from 192.168.65.254 (192.168.65.254): icmp_seq=1 ttl=63 time=0.285 ms
64 bytes from 192.168.65.254 (192.168.65.254): icmp_seq=2 ttl=63 time=0.702 ms
--- host.docker.internal ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.285/0.493/0.702/0.208 ms
# ping -c 2 192.168.65.254
PING 192.168.65.254 (192.168.65.254) 56(84) bytes of data.
64 bytes from 192.168.65.254: icmp_seq=1 ttl=63 time=0.304 ms
64 bytes from 192.168.65.254: icmp_seq=2 ttl=63 time=0.345 ms
--- 192.168.65.254 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1043ms
rtt min/avg/max/mdev = 0.304/0.324/0.345/0.020 ms
# ping -c 2 fdc4:f303:9324::254
ping: connect: Network is unreachable
# ping -c 2 host.docker.internal -6
ping: connect: Network is unreachable
# ping -c 2 ip6-localhost
PING ip6-localhost(localhost (::1)) 56 data bytes
64 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from localhost (::1): icmp_seq=2 ttl=64 time=0.041 ms
--- ip6-localhost ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1046ms
rtt min/avg/max/mdev = 0.025/0.033/0.041/0.008 ms
So it seems the issue is that with Docker v4.31.0, host.docker.internal resolves to an IPv6 address that lies in an unreachable network...
...so any service that preferentially tries the IPv6 address no longer works (unless it falls back to the working IPv4 address).
Thank you for documenting this issue. Several on my team have encountered it when upgrading.
My team is also stuck on 4.30 because of it.
Note: The Docker image used in the original post (which led to discovering this issue) has been updated with a fix for this issue.
If someone wants to reproduce using the originally posted steps, the mentioned docker-compose.yml needs to be changed to use the previous/affected Docker image:
Reproduce
- Use this docker-compose.yml: https://github.com/jo-tools/docker/blob/main/local-cubesql-volumes/docker-compose.yml
- Edit docker-compose.yml and change
  from: image: jotools/cubesql-webadmin ('latest' has been updated with a fix for this issue)
  to: image: jotools/cubesql-webadmin:1.0.0 (the affected image, which works with Docker < v4.31.0 but fails to connect with Docker v4.31.0; a sed one-liner for this edit is sketched below)
- Run docker-compose up -d
- Open the Web Admin Tool in the browser: http://localhost:4431
- Push the 'Connect' button
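Equivalently, the image tag can be pinned from the command line; a small sketch using BSD/macOS sed (on Linux, drop the empty '' argument):
# pin the affected image tag in docker-compose.yml, then bring the stack up
sed -i '' 's|image: jotools/cubesql-webadmin.*|image: jotools/cubesql-webadmin:1.0.0|' docker-compose.yml
docker-compose up -d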
Anyway - I don't think it's necessary to use that docker compose setup to reproduce this issue.
The other replies show in more detail what the underlying issue is.
Similar issue here. An nginx container resolves host.docker.internal to an IPv6 address and then cannot reach it.
@jo-tools what did you change in the cubesql-webadmin image to make it work on 4.31?
I've fixed the client connector (*).
If the hostname resolves to both IPv4 and IPv6 addresses, it now tries both and uses the first successful connection (which in Docker v4.31 will be the IPv4 address, since the IPv6 address can't be reached).
The bug in the client connector was that it previously aborted on any error (e.g. the unreachable IPv6 address) instead of trying the other resolved address. The good thing about this (now fixed/improved) connector bug is that it led to this Docker issue being discovered ;)
Edit: (*) this means: I fixed the service running inside the Docker container to cope with hostnames resolving to both IPv4 and IPv6 addresses. Should one of the two not work, it falls back to the other one.
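For anyone curious, here is a rough sketch of that fallback logic in shell (not the actual connector code; it assumes a netcat that supports -z/-w, e.g. the OpenBSD variant, and uses the database port 4430 from the setup above):
# resolve every address (IPv4 and IPv6) the name maps to and try them in order;
# the first one that accepts a TCP connection wins; on Docker v4.31 that is the
# IPv4 address, because the IPv6 one sits in an unreachable network
for ip in $(getent ahosts host.docker.internal | awk '{print $1}' | sort -u); do
  if nc -z -w 2 "$ip" 4430; then
    echo "connected via $ip"
    break
  fi
done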
I may have missed it, but is there a config or something to work around this? We just hit it with our local Envoy containers refusing to connect to other services running in Compose. Downgrade seems to be the only straightforward option, but if there's another way, that would be great.
Making our app server and dependencies work with IPv6 would take a decent amount of work, since that work would be specific to each individual container.
(I'm seeing this on Windows Docker Desktop 4.32 as well.)
Version info
$ docker version
Client:
Version: 27.0.3
API version: 1.46
Go version: go1.21.11
Git commit: 7d4bcd8
Built: Sat Jun 29 00:01:25 2024
OS/Arch: linux/amd64
Context: default
Server: Docker Desktop 4.32.0 (157355)
Engine:
Version: 27.0.3
API version: 1.46 (minimum version 1.24)
Go version: go1.21.11
Git commit: 662f78c
Built: Sat Jun 29 00:02:50 2024
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.7.18
GitCommit: ae71819c4f5e67bb4d5ae76a6b735f29cc25774e
runc:
Version: 1.7.18
GitCommit: v1.1.13-0-g58aa920
docker-init:
Version: 0.19.0
GitCommit: de40ad0
The release notes for 4.33.0 listed a bug fix for this issue, but I'm still seeing IPv6 addresses assigned to host.docker.internal with that version.
Version info
% docker version
Client:
Version: 27.1.1
API version: 1.46
Go version: go1.21.12
Git commit: 6312585
Built: Tue Jul 23 19:54:12 2024
OS/Arch: darwin/arm64
Context: desktop-linux
Server: Docker Desktop 4.33.0 (160616)
Engine:
Version: 27.1.1
API version: 1.46 (minimum version 1.24)
Go version: go1.21.12
Git commit: cc13f95
Built: Tue Jul 23 19:57:14 2024
OS/Arch: linux/arm64
Experimental: false
containerd:
Version: 1.7.19
GitCommit: 2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41
runc:
Version: 1.7.19
GitCommit: v1.1.13-0-g58aa920
docker-init:
Version: 0.19.0
GitCommit: de40ad0
We're also still seeing this in 4.33.
Rolling back to 4.30 seems to work.
Version Info:
Client:
Version: 27.1.1
API version: 1.46
Go version: go1.21.12
Git commit: 6312585
Built: Tue Jul 23 19:54:12 2024
OS/Arch: darwin/arm64
Context: desktop-linux
Server: Docker Desktop 4.33.0 (160616)
Engine:
Version: 27.1.1
API version: 1.46 (minimum version 1.24)
Go version: go1.21.12
Git commit: cc13f95
Built: Tue Jul 23 19:57:14 2024
OS/Arch: linux/arm64
Experimental: false
containerd:
Version: 1.7.19
GitCommit: 2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41
runc:
Version: 1.7.19
GitCommit: v1.1.13-0-g58aa920
docker-init:
Version: 0.19.0
GitCommit: de40ad0
FYI, gateway.docker.internal seems to have the same problem on 4.33. Falling back to 4.30 works for that hostname as well.
Paraphrasing some stuff from a conversation with support:
There is a new setting planned for an upcoming release (called IPv4 only, under Settings > Resources > Network). I was able to test it on a dev build and it fixed this issue. They are hopeful it will be included in 4.34.0 but there's a possibility it slips to 4.35.0 🤞
Looks like they weren't able to include it in 4.34.0.
I'm also having this issue. Would really appreciate a fix. Thanks.
I'm also having this issue - would be really great to have it fixed. Thank you
+1. Would really appreciate for a fix. Thanks.
Paraphrasing some stuff from a conversation with support:
There is a new setting planned for an upcoming release (called IPv4 only, under Settings > Resources > Network). I was able to test it on a dev build and it fixed this issue. They are hopeful it will be included in 4.34.0 but there's a possibility it slips to 4.35.0 🤞
Not seeing this option after updating to 4.35, also didn't spot anything in the release notes
Version info
% docker version
Client:
Version: 27.3.1
API version: 1.47
Go version: go1.22.7
Git commit: ce12230
Built: Fri Sep 20 11:38:18 2024
OS/Arch: darwin/arm64
Context: desktop-linux
Server: Docker Desktop 4.35.0 (172550)
Engine:
Version: 27.3.1
API version: 1.47 (minimum version 1.24)
Go version: go1.22.7
Git commit: 41ca978
Built: Fri Sep 20 11:41:19 2024
OS/Arch: linux/arm64
Experimental: false
containerd:
Version: 1.7.21
GitCommit: 472731909fa34bd7bc9c087e4c27943f9835f111
runc:
Version: 1.1.13
GitCommit: v1.1.13-0-g58aa920
docker-init:
Version: 0.19.0
GitCommit: de40ad0
Please try to include this in the coming version; some of us are really counting on it for our day-to-day work and are actually stuck on version 4.28.0.
I did discover that the (new) settings config includes IPv4Only and IPv6Only, both set to false. If you change IPv4Only to true and do a full Docker restart, it'll use IPv4 for host.docker.internal, so it appears the setting was released, just not the UI to enable/disable it. 😅
@tlbraams Not sure what you mean, I didn't see anything in the settings config. Are you saying that in the latest version we can change the configuration, just not through the UI? I tried that and didn't succeed; if you have done it, can you please tell us how?
What worked for me on macOS 14.7.1 with Docker 4.35.0:
- Quit Docker Desktop
- Edit ~/Library/Group Containers/group.com.docker/settings-store.json (the path given in the doc that @tlbraams linked) and change IPv4Only to true. Make sure IPv6Only is false. (A scripted version of this edit is sketched below.)
- Start Docker Desktop
:warning: EDIT May 2025: This doesn't seem to work anymore in 4.41.x.
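If you prefer to script the edit step, here is a minimal sketch, assuming jq is available (jq rewrites/pretty-prints the whole file, so keep a backup):
# quit Docker Desktop first (menu-bar whale icon > Quit Docker Desktop)
SETTINGS="$HOME/Library/Group Containers/group.com.docker/settings-store.json"
cp "$SETTINGS" "$SETTINGS.bak"
jq '.IPv4Only = true | .IPv6Only = false' "$SETTINGS" > "$SETTINGS.tmp" && mv "$SETTINGS.tmp" "$SETTINGS"
# then start Docker Desktop again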
I did discover that the (new) settings config includes IPv4Only and IPv6Only, both set to false. If you change IPv4Only to true and do a full Docker restart, it'll use IPv4 for host.docker.internal, so it appears the setting was released, just not the UI to enable/disable it. 😅
@tlbraams Not sure what you mean, I didn't see anything in the settings config. Are you saying that in the latest version we can change the configuration, just not through the UI? I tried that and didn't succeed; if you have done it, can you please tell us how?
Did you perhaps edit settings.json? As @acj mentions, Docker now (4.35.0+) uses the settings-store.json file. Those steps are indeed what I intended to indicate.
The issue is still present in 4.35.1 (173168).
I had both settings.json and settings-store.json present and made sure both were set accordingly:
"IPv4Only": true, "IPv6Only": false
That did the trick. Thank you @acj!
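A quick way to confirm the setting took effect once Docker Desktop is back up (the expected address is the IPv4 gateway from the outputs earlier in this thread):
docker run --rm alpine:latest getent hosts host.docker.internal
# with "IPv4Only": true this should now print 192.168.65.254 instead of the fdc4:... address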
This issue is still present in 4.36.0 (175267).
I found I only needed to modify settings-store.json.
Thank you @acj!
Is this still required on 4.39.0? I would prefer not to have to instruct anyone using Docker at work to apply this patch.
This is still happening in v4.40.0... very confusing behaviour when devs upgrade their Docker and hostnames become unreachable.