docs
No info on IPv6 NAT on IPv6 networking page
Is this a docs issue?
- [X] My issue is about the documentation content or website
Type of issue
I can't find what I'm looking for
Description
I can't find any info on IPv6 NAT on the IPv6 networking page. As a start, it would be nice if the page made it clearer which networking model is even used (reading between the lines, I assume it's subnet delegation without NAT). Then it would be great to have a really prominent warning that this is very different from IPv4, may break many containers that rely on NAT separation, and/or has really major security pitfalls with `expose` no longer working, along with how to enable IPv6 NAT instead for these use cases, which I heard Docker now supports.
Location
https://docs.docker.com/config/daemon/ipv6/
Suggestion
See above. Info on the IPv6 NAT setup is missing, as well as info on why and when it should be used (safe use of `expose`, and for containers that expect this setup).
Never mind, I'm guessing `ip6tables` actually is the NAT? Nevertheless, it would be great if this could be made clearer in the text.
> I'm guessing `ip6tables` actually is the NAT
Yes, that will match the IPv4 experience: a bridge that each container gets an IPv6 address from, with published ports bound to an interface on the host that has a public IPv6 address. The bridge uses ULA addresses.
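For reference, a minimal `/etc/docker/daemon.json` sketch for that setup (the ULA prefix here is just an example for the default bridge, and on older Docker releases `ip6tables` was additionally gated behind `"experimental": true`):

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:beef:cafe:1::/64",
  "ip6tables": true
}
```

Restart the daemon afterwards (e.g. `sudo systemctl restart docker`) for the settings to take effect.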
IPv6 can avoid the NAT without host-mode networking by assigning a GUA address too, but there are more gotchas there.
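If you do want GUA addresses instead, a rough sketch (the `2001:db8:` prefix is documentation-only; you'd substitute a prefix actually routed or delegated to your host):

```console
$ docker network create --ipv6 --subnet 2001:db8:1::/64 gua-net
```

Containers on that network then have globally routable addresses, so your firewall rules matter much more.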
I have written similar docs, in case the official Docker IPv6 docs aren't helpful for you. Let me know if they explain anything more clearly: https://github.com/moby/moby/issues/40275#issuecomment-1704701907
```yaml
services:
  your_service_here:
    networks:
      - custom-ipv6-net

# IPv6 ULA `/64` subnet, a private range for NAT similar to the IPv4 private range subnets.
# Use any IPv6 `/64` prefix in this range: `fd00:xxxx:yyyy:zzzz::/64`
networks:
  custom-ipv6-net:
    enable_ipv6: true
    ipam:
      config:
        - subnet: fd00:cafe:face:feed::/64
```
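To sanity-check the result after `docker compose up -d`, something like this should show the ULA subnet and the container's address (the `myproject_` prefix and container name are placeholders; compose prefixes the network name with your project name):

```console
$ docker network inspect myproject_custom-ipv6-net --format '{{json .IPAM.Config}}'
$ docker inspect your_service_container --format '{{range .NetworkSettings.Networks}}{{.GlobalIPv6Address}}{{end}}'
```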
> a really prominent warning that this is very different from IPv4, may break many containers that rely on NAT separation, and/or has really major security pitfalls with `expose` no longer working
One caveat to be aware of: `userland-proxy: true` has been the default in `/etc/docker/daemon.json` for quite some time, and while it will eventually be disabled by default, it does introduce some risks with IPv6 while it is enabled:
- IPv6-capable hosts can connect to a port on the host and be routed to an IPv4-only container for that same port, IIRC.
- This is Docker trying to be helpful, but the source address of the original IPv6 client is lost and becomes the IPv4 gateway of the Docker bridge network it was routed through.
- If you have any software that trusts the subnet (e.g. Postfix) or monitors IP addresses (e.g. Fail2Ban), it may behave a bit surprisingly, trusting foreign clients or banning all IPv6 clients due to that proxying effect on the source address.
`userland-proxy` is generally useful for the host to reach a container over `localhost`/`127.0.0.1` plus a mapped port; there is no equivalent of this for IPv6 (the kernel does not support an equivalent feature). I also recall something similar for IPv4 (unrelated to IPv6) allowing a host on the local network to adjust its routing table to connect to private containers it should otherwise not have access to. UFW could not prevent it IIRC, but firewalld could.
So unless you need it, you might want to disable `userland-proxy`, but that is not without caveats: there are some changes in networking behaviour between containers, or between a container and the host, which may break for you. Most of those could be resolved with some tweaking of the iptables rules, but I lost this information 😅
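If you want to experiment with that, the toggle is just this in `/etc/docker/daemon.json` (followed by a daemon restart); test container-to-host and container-to-container traffic afterwards, given the caveats above:

```json
{
  "userland-proxy": false
}
```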
Also, in case you were not aware: when you map a port from a container so that it can be reached via another IP/interface, I recall this bypasses any UFW/firewalld rules that would otherwise have denied that traffic. Not much can be done about that AFAIK, since both the firewall frontends and Docker manipulate the same networking rules.
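One common mitigation, assuming the port only needs to be reachable locally (e.g. behind a reverse proxy running on the host), is to bind the published port to loopback so it is never exposed on other interfaces, regardless of firewall rules:

```yaml
services:
  your_service_here:
    ports:
      # Bind only on the host loopback interface instead of 0.0.0.0 / [::]:
      - "127.0.0.1:8080:80"
```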
There hasn't been any activity on this issue for a long time.
If the problem is still relevant, mark the issue as fresh with a /remove-lifecycle stale comment.
If not, this issue will be closed in 14 days. This helps our maintainers focus on the active issues.
Prevent issues from auto-closing with a /lifecycle frozen comment.
/lifecycle stale
Closed issues are locked after 30 days of inactivity. This helps our team focus on active issues.
If you have found a problem that seems similar to this, please open a new issue.
/lifecycle locked