
Possible deprecation of docker-ipv6nat

Open robbertkl opened this issue 5 years ago • 53 comments

With the merge of https://github.com/moby/libnetwork/pull/2572 we're finally 1 step closer to having IPv6 NAT built into Docker!

I'm creating this issue to track the release of this feature, and to figure out if there are any remaining use cases for this tool. If not, we can deprecate this tool in favor of the built-in functionality.

robbertkl avatar Nov 30 '20 14:11 robbertkl

@robbertkl I think we should keep it up until built-in IPv6 NAT is rolled out for most distributions. In addition, we need to check whether built-in IPv6 NAT behaves the same way docker-ipv6nat does. :wink:

bephinix avatar Nov 30 '20 21:11 bephinix

Exactly, agree 100%! I wanted to use this issue to share findings on behavior of built-in IPv6 NAT. After confirming this tool is no longer needed, I wanted to deprecate it with a README message, but still keep it available until the built-in IPv6 NAT is widespread.

robbertkl avatar Nov 30 '20 21:11 robbertkl

We should also track moby/moby#41622, since it is required to enable IPv6 NAT in the Docker daemon.

Many thanks also for the great work on this project; it has made my work with IPv6 and Docker much easier.

bboehmke avatar Dec 01 '20 07:12 bboehmke

Docker 20.10 with IPv6 NAT is out but it has some serious issues: https://github.com/moby/moby/issues/41774

J0WI avatar Dec 10 '20 16:12 J0WI

I was actually coming here to open a ticket about this very thing. :)

The latest stable update on Manjaro included Docker 20.10, and I saw the new ipv6nat functionality--and read the long thread of people trying to figure out exactly how it should work, here: https://github.com/moby/moby/pull/41622

It sounds like it's very much still experimental? I'm not sure how to check whether a feature is considered experimental or not?
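
For what it's worth, the daemon can be asked directly whether it is running in experimental mode; with a recent Docker CLI, either of these should print true or false:

docker version --format '{{.Server.Experimental}}'
docker info --format '{{.ExperimentalBuild}}'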

In the meantime, if we've been using docker-ipv6nat without issue, can we just continue as we were, or will the new built-in tools break it? I'd prefer not to switch until it's had at least a few months for the most critical bugs to be worked out.

(It's also amazing to me--in a good way--that the official Docker release is implementing IPv6 NAT after months/years of philosophical pushback about NAT'ing IPv6 being Wrong®. Maybe it is in most contexts, but it's clearly the best way to go in Docker, given how seamless v4 NAT'ing is with containers.)

Thanks for all your work on this. I could never have used IPv6 before this point on docker without your work. :)

johntdavis84 avatar Jan 01 '21 20:01 johntdavis84

I have no intention of pulling the plug until we can all agree Docker offers the same functionality (and stability). Of course, I'll be hesitant to add new features to docker-ipv6nat when it might be deprecated "soonish". We're keeping an eye on the development within Docker, and currently have no reason to think it will break docker-ipv6nat as long as you keep the built-in ip6tables option disabled. Thanks for the support @johntdavis84!

robbertkl avatar Jan 01 '21 20:01 robbertkl

With release 20.10.2, the upstream IPv6 NAT finally seems to work.

If you want to give it a try, simply add the following lines to /etc/docker/daemon.json:

{
  "experimental": true,
  "ip6tables": true
}

and configure IPv6 the same way as for this container (see https://github.com/robbertkl/docker-ipv6nat#docker-ipv6-configuration).
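
Putting it together, a complete /etc/docker/daemon.json could then look something like this (the fd00::/80 prefix below is only an example; use your own ULA subnet):

{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80",
  "experimental": true,
  "ip6tables": true
}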

Note: the ipv6nat container should not be running if ip6tables is enabled in the Docker daemon.

bboehmke avatar Jan 05 '21 12:01 bboehmke

There's a regression in 20.10.2: https://github.com/moby/moby/issues/41858 https://github.com/moby/libnetwork/issues/2607

J0WI avatar Jan 06 '21 17:01 J0WI

I’m very pleased to see how aggressively this is being developed/bugs are being squished. If this is going to be part of docker’s core functionality, it needs to be rock solid.

It’s especially nice to see given the previous resistance from some of the docker community to incorporating NAT-based IPv6.

Robbert, are they collaborating with you at all or drawing from your codebase, or did they roll this from scratch?

  • JTD.

johntdavis84 avatar Jan 06 '21 18:01 johntdavis84

No collaboration, I think they rolled it from scratch. That makes the most sense, as they can mirror the internal workings of the IPv4 NAT. Docker-ipv6nat is set up as an external listener, so it doesn't make much sense to draw from this codebase.

I agree that it seems they're very much on top of things. Since the decision was made to make it part of Docker, they're taking it seriously.

robbertkl avatar Jan 06 '21 18:01 robbertkl

That makes sense. Thanks for taking the time to explain how these things work. I’m a relative Linux/networking newbie, and I feel like I’m starting to get a handle on the basics, but the intricacies of IPv6 especially, and how it is implemented across various systems, remain impenetrable deep magic.

I understand why IPv6 is so important, but I am deeply concerned that it’s so difficult to use compared to IPv4. That’s fine for commercial/professional deployments, but the current tools available do not seem anywhere near as accessible to home networking prosumers as the IPv4 tool stack is.

Docker-ipv6nat is one of the few “let’s make this easier” v6 tools I’ve found.

		- JTD.

johntdavis84 avatar Jan 06 '21 18:01 johntdavis84

Has anyone tried enabling IPv6 NAT for the default bridge network? In my case dockerd tries to execute a wrong command and crashes. Reported it here: https://github.com/moby/moby/issues/41861

fnkr avatar Jan 06 '21 21:01 fnkr

Hi all,

With Docker 20.10.6 the ipv6nat function is fully integrated (experimental). You can add the following flags to your daemon.json:

{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80",
  "experimental": true,
  "ip6tables": true
}

thedejavunl avatar Apr 13 '21 08:04 thedejavunl

Hi all,

With Docker 20.10.6 the ipv6nat function is fully integrated (experimental). You can add the following flags to your daemon.json: { "ipv6": true, "fixed-cidr-v6": "fd00::/80", "experimental": true, "ip6tables": true }

Thanks for the update. How does this compare to the earlier updates that enabled/tweaked IPv6 NAT? Is it considered feature complete now, with no known bugs?

I found this in the release notes:

Networking

  • Fix a regression in docker 20.10, causing IPv6 addresses no longer to be bound by default when mapping ports moby/moby#42205
  • Fix implicit IPv6 port-mappings not included in API response. Before docker 20.10, published ports were accessible through both IPv4 and IPv6 by default, but the API only included information about the IPv4 (0.0.0.0) mapping moby/moby#42205
  • Fix a regression in docker 20.10, causing the docker-proxy to not be terminated in all cases moby/moby#42205
  • Fix iptables forwarding rules not being cleaned up upon container removal moby/moby#42205

johntdavis84 avatar Apr 13 '21 17:04 johntdavis84

The Docker versions between 20.10.2 and 20.10.6 had some regressions with the userland proxy. These issues are now solved and the daemon should work exactly as before when ip6tables is disabled.

So far there are no known bugs left in the IPv6 handling (at least none that I am aware of).

I have already used version 20.10.2 in a semi-production setup without any issues (with the userland proxy disabled).

bboehmke avatar Apr 13 '21 17:04 bboehmke

Thanks for the info.

I have a VM running Manjaro that I can test this in once it’s available there...

johntdavis84 avatar Apr 13 '21 18:04 johntdavis84

I can confirm that Docker 20.10.6's ipv6nat implementation works, and it seems to work exactly like this container does. The only difference I have seen is that the docker ps command now shows the ports mapped for both IPv4 and IPv6. The downside is that "experimental" mode needs to be turned on.
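
For example, after publishing a port you should see bindings for both address families in the PORTS column, roughly like this (nginx is just a stand-in image):

docker run -d -p 8080:80 nginx
docker ps --format '{{.Ports}}'
# 0.0.0.0:8080->80/tcp, :::8080->80/tcp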

Rycieos avatar Apr 14 '21 20:04 Rycieos

Let's keep this issue open until NAT for IPv6 is available in upstream docker without experimental mode. :+1:

bephinix avatar Apr 14 '21 20:04 bephinix

Now (20.10.7) I am using this experimental feature with docker-compose and it works perfectly!

chesskuo avatar Jun 26 '21 02:06 chesskuo

@chesskuo How do I make this work with docker-compose stacks (which use custom bridge networks)? My containers only get IPv4 addresses unless I use the default bridge network.

fnkr avatar Jul 20 '21 13:07 fnkr

How do I make this work with docker-compose stacks (which use custom bridge networks)? My containers only get IPv4 addresses unless I use the default bridge network.

You need to define an IPv6 subnet for the network:

networks:
  network:
    driver: bridge
    enable_ipv6: true
    ipam:
      config:
        - subnet: fd00:abcd:ef12:1::/64
        - subnet: 10.1.0.0/16

Rycieos avatar Jul 20 '21 13:07 Rycieos

If you want to make the network persistent, so that it exists all the time (even when the container is not running), you can use the docker network create command to do the same thing.

This is useful if you have a number of containers that need to use the same network (e.g., if you’re running NGINX Reverse Proxy Manager in container A and need to run a reverse-proxied service in container B).*

*This might not be The One True Way® to do this, but it works.
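
Something along these lines should do it; the network name is a placeholder and the subnets are simply reused from the example above:

docker network create \
  --driver bridge \
  --attachable \
  --ipv6 \
  --subnet 10.1.0.0/16 \
  --subnet fd00:abcd:ef12:1::/64 \
  shared-proxy-net

Compose stacks can then reference it as an external network:

networks:
  shared-proxy-net:
    external: true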


John T Davis

johntdavis84 avatar Jul 20 '21 14:07 johntdavis84

@fnkr

this is the networks part of my docker-compose.yml:

networks:
  traefik:
    name: traefik
    attachable: true
    enable_ipv6: true
    ipam:
      config:
        - subnet: 172.100.0.0/24
          gateway: 172.100.0.254
        - subnet: fd00:dead:beef::/112
          gateway: fd00:dead:beef::254

chesskuo avatar Jul 20 '21 15:07 chesskuo

Thanks. Unfortunately this means we'll have to deal with IP addresses in docker-compose.yaml. We would prefer it if Docker automatically assigned IPv6 subnets to networks, as it does for IPv4.

For now, we only need IPv6 in CI (for outbound connections to IPv6-only servers), so we'll just connect all containers to the default bridge network to make it work:

for container in $(docker ps -q -f "label=com.docker.compose.project.working_dir=${PWD}"); do docker network connect bridge "$container"; done

fnkr avatar Jul 20 '21 15:07 fnkr

I found out that linuxserver/wireguard:

  • Works in "vanilla" Docker
  • Works when using docker-ipv6nat
  • Does not work when using "experimental": true, "ip6tables": true

It seems that there's some difference in how the two implementations manipulate iptables, and yours seems to integrate better with WireGuard.

RaphMad avatar Aug 01 '21 12:08 RaphMad

Just as a follow-up to my comment above, I found the problem to be the default policy on the FORWARD chain, which was set to DROP, rendering all routing useless. The policy is set inconsistently between yours and the experimental Docker implementation (as well as between IPv4 and IPv6):

  • docker-ipv6nat: Chain FORWARD (policy ACCEPT...
  • docker with experimental/ip6tables: Chain FORWARD (policy DROP...

Interestingly, the default FORWARD policy for IPv4 is also set to ACCEPT by Docker (contrary to what's stated in this doc: https://docs.docker.com/network/iptables/#docker-on-a-router).
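
The policy shows up in the first line of the chain listing, so it is easy to compare on your own host:

iptables -L FORWARD -n | head -n 1
ip6tables -L FORWARD -n | head -n 1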

TLDR:

If your Docker host is also doing routing, jumping from docker-ipv6nat to the experimental implementation may break routing. Whether this is intended or not is hard to tell, as even the IPv4 documentation seems inconsistent in that regard.

As a fix, I run this script when my network comes up, but you could also simply change the default FORWARD policy to ACCEPT. I also noticed that the DOCKER-USER chain is currently not created by the experimental IPv6 implementation. Since it has become somewhat of an agreed standard for IPv4, I added it for IPv6 as well and put my ACCEPT rules there (note that your interfaces may be different, and depending on your routing use case you might want completely different ACCEPT rules):

# Create the DOCKER-USER chain and hook it into FORWARD, mirroring the IPv4 setup
ip6tables -N DOCKER-USER
ip6tables -I FORWARD -j DOCKER-USER
# By default, fall through to the rest of the FORWARD chain
ip6tables -A DOCKER-USER -j RETURN
# Allow forwarding between the WireGuard interface and the upstream interface (adjust to your setup)
ip6tables -I DOCKER-USER -i wg0 -o enp0s4 -j ACCEPT
ip6tables -I DOCKER-USER -i enp0s4 -o wg0 -j ACCEPT

EDIT: Here is a very succinct list of the differences, since this post became rather long and the ticket is mainly about tracking differences:

  • docker-ipv6nat creates an ip6tables FORWARD chain with a default policy of ACCEPT, while the current experimental implementation sets it to DROP
  • docker-ipv6nat creates an additional chain DOCKER-USER hooked into FORWARD, while the current experimental implementation does not

RaphMad avatar Aug 03 '21 12:08 RaphMad

Does this feature work with Docker Desktop v20.10.14 for Mac? I'm unable to connect to IPv6 hosts or ping them from inside the container, even if I put

{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80",
  "experimental": true,
  "ip6tables": true
}

into the config.

romansavrulin avatar Jun 15 '22 15:06 romansavrulin

Does this feature work with Docker Desktop v20.10.14 for Mac?

I don't think it will. Docker for Mac runs in a virtual machine (xhyve), not directly in macOS.

robbertkl avatar Jun 15 '22 18:06 robbertkl

Something I noticed: if you use a ULA prefix for fixed-cidr-v6 like fd00::/80, everything inside your container will still prefer IPv4 over IPv6 unless you force it to use IPv6. For example, if you ping or curl (without the -6 flag) dual-stack hosts, it will talk to them via IPv4. Kind of a dealbreaker for me.

I guess the OS is smart and knows that a ULA address isn't supposed to be able to talk to a global address and therefore doesn't even try in the first place. (As far as I know, this matches the default address selection rules of RFC 6724, which prefer IPv4 over a connection from a ULA source to a global IPv6 destination.)

So then I tried it with the designated documentation prefix 2001:db8::/32, which technically isn't a ULA prefix but also isn't globally routed. And it did fix the problem. 🎉 I don't know whether this is a bad idea, but I don't see how this could hurt anything if it's behind a NAT anyway.
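
A quick way to see which family actually gets used from inside a container (the network name is a placeholder for whatever IPv6-enabled network you created):

docker run --rm --network my-v6-net curlimages/curl \
  -sS -o /dev/null -w 'connected via %{remote_ip}\n' https://www.google.com

If this prints an IPv6 address, the container preferred IPv6 for that dual-stack host.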

A1bi avatar Oct 28 '22 22:10 A1bi

@A1bi That explains what is going on with #78 for me

guysoft avatar Oct 28 '22 23:10 guysoft