weave
support IPv6
Weave currently only works over IPv4. The main areas that need attention in order to support IPv6 are:
- PMTU discovery, which currently relies on detection & injection of ICMP type 3, code 4 ("fragmentation needed") packets and is therefore IPv4-specific; the IPv6 equivalent is the ICMPv6 "packet too big" message (type 2).
- Fragmentation, i.e. when weave performs fragmentation itself because it cannot trust the stack to do it. This is IPv4-specific, though we may not need it at all for IPv6, since in IPv6 all fragmentation is supposed to happen at the source, based on the PMTU.
- Various overhead calculations, e.g. UDP and IP header sizes. These are IPv4-specific.
- Peer connections. These currently use udp4/tcp4 addressing.
- Bridge, interface, firewall and docker port forwarding configuration, i.e. all the stuff we do in the 'weave' script. These might just work when given IPv6 addresses, but we can't be certain at this stage.
+1 this would be great to enable many isolated networks for multi-tenancy
IPv6 is NOT A FEATURE.
There is a great deal of risk in designing network virtualization applications that use IPv4 in the beginning, because you will quickly run into problems where IPv4-isms have been "baked in" and it will be difficult to fix when transitioning to IPv6. I have a bit of experience with this, working on the OpenStack Neutron project, where APIs were thought to be IPv6 capable (Hey, there's an ip_version attribute you can set to 6!) but were not expressive enough to convey important networking primitives (DHCPv6? Stateless Auto-Configuration? etc.)
Any news?
Hi @vtolstov I am afraid we have no plans to work on IPv6 support right now - when we do, you'll see this issue move from the icebox to an actual milestone.
It seems to me there can be two IPv6 requirements:
1. IPv4 overlay networking encapsulated over an IPv6 underlay
2. IPv6 overlay networking encapsulated over an IPv4 underlay
and you can also wish for both together.
Most of the points in the original description relate to 1, however:
- 3rd point: all references to IP header size are for the overlay; however, IP headers also play a part in the fragmentation calculations covered in point 2.
- 5th point: the only time we use an iptables rule with an IP address is for `weave expose`, and the IP address is from the overlay.
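A minimal sketch of the overhead calculation mentioned in the 3rd point, assuming a simple IP+UDP encapsulation (the `encapOverhead` parameter stands in for whatever the tunnel itself adds, e.g. VXLAN or Weave's own framing; it is a placeholder, not Weave's real constant):

```go
package main

import "fmt"

const (
	ipv4HeaderLen = 20 // minimum IPv4 header, no options
	ipv6HeaderLen = 40 // fixed IPv6 header, no extension headers
	udpHeaderLen  = 8
)

// overlayMTU returns the payload MTU available to the overlay, given
// the underlay MTU and whether the underlay carries IPv6. This is the
// kind of calculation that is IPv4-specific today: the IP header term
// must vary with the underlay's address family.
func overlayMTU(underlayMTU, encapOverhead int, ipv6Underlay bool) int {
	ipLen := ipv4HeaderLen
	if ipv6Underlay {
		ipLen = ipv6HeaderLen
	}
	return underlayMTU - ipLen - udpHeaderLen - encapOverhead
}

func main() {
	fmt.Println(overlayMTU(1500, 8, false)) // IPv4 underlay: 1464
	fmt.Println(overlayMTU(1500, 8, true))  // IPv6 underlay: 1444
}
```

The 20-byte difference between the two results is exactly the gap between the IPv4 and IPv6 header sizes, which is why hard-coded overhead constants break when the underlay family changes.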
I'm using weave net in a single-node k8s cluster, and the server has both IPv4 and IPv6. When I launch a docker container on a port and thus bypass k8s, it is accessible at both http://IPV4:PORT and http://IPV6:PORT. However, when I try reaching a k8s NodePort service at http://IPV6:ANOTHER_PORT, the same docker container is no longer reachable. As a non-expert in networking, I've been assuming that as long as the nodes are reachable via IPv6, everything should work because of some kind of "bridging" of IPv6 to IPv4 when traffic gets into k8s.
Am I right that accessing a k8s workload via the node's external IPv6 addresses will only be possible when weave net gets IPv6 support? Or is the issue somewhere in kube-proxy or another k8s component? I'm using k8s 1.8.4 and a corresponding version of weave net with it.
@kachkaev Weave Net currently only assigns an ipv4 address inside the pod, and the NAT schemes used to expose services do not work cross-protocol. I expect Docker is assigning an ipv6 address to the container and mapping that out.
Many parts of Kubernetes need updating to support ipv6. Pure-ipv6 is nearly done (e.g. see here), but dual-stack (ipv4 and ipv6 in parallel) needs more work.
There's no particular reason to use Weave Net on a single-node cluster; the CNI bridge plugin would work just as well (and supports ipv6).
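For the single-node case above, a minimal CNI bridge configuration with dual-stack host-local IPAM might look like the following sketch (the name, bridge device, and subnets are illustrative values, not recommendations):

```json
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "ranges": [
      [{ "subnet": "10.22.0.0/16" }],
      [{ "subnet": "2001:db8:42::/64" }]
    ]
  }
}
```

Each entry in `ranges` is a separate pool, so listing one IPv4 and one IPv6 subnet assigns the pod an address from each family.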
Actually, in my case I only need a local interface with IPv6 (no external communication); it's for localhost usage. Is that possible? I can do it with Docker:
```
$ docker run --rm -it busybox sysctl -a | grep disable_ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.eth0.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
$ docker run --rm -it busybox ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
```
But with the same image under Kubernetes and the Weave plugin:
```
$ kubectl exec -ti busybox-sleep -- sysctl -a | grep disable_ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.eth0.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
$ kubectl exec -ti busybox-sleep -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
```
```
$ cat /etc/cni/net.d/10-weave.conflist
"cniVersion": "0.3.0",
```
Weave image: `docker.io/weaveworks/weave-kube:2.5.1`
Thanks
Hello, I tested the Weave solution with Kubernetes and found it simple and functional. However, looking at this ticket, I see that IPv6 is not implemented, which is very disappointing because I was very interested in your solution. Is IPv6 stack support planned, or should I look elsewhere? Thank you for the clarification.
@duylong no work in progress, no.
Could you clarify which of the options at https://github.com/weaveworks/weave/issues/19#issuecomment-268481561 you are interested in?
Hi, any ETA for supporting IPv6 pods? I used the Weave solution for a Kubernetes cluster, but the lack of IPv6 support is frustrating. We need this because we develop the Jetty project, and our CI is built on top of Kubernetes agents, but we can't test IPv6 network support.
Does anyone have a solution for supporting IPv6 in Kubernetes, maybe with another network plugin? Thanks!
Hi, I suggest having a look at Cilium or Calico. I am studying them at the moment.
@jcvizzi if you have any luck with those let me know :) Thanks!