Add support for IPv6
As documented at https://minikube.sigs.k8s.io/docs/contrib/roadmap/
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten
/remove-lifecycle rotten
Downgrading to backlog since no one has touched this -- but I do feel strongly about us supporting this use case.
@tstromberg I was planning on starting this, if that's OK with you? Thanks
For me, a mere user, IPv6 support is much more important than, for example, multi-node support or other more advanced K8s features.
I use minikube on Linux to quickly spin up fresh clusters for integration testing.
Some of the tested applications are intended and configured for IPv6-only clusters. Being forced to fall back to IPv4 in the test setup would thus reduce coverage or risk introducing errors when modifying the configuration.
@vishjain - please do!
As someone looking at deploying an IPv6-only cluster, I would really love to see support for it within minikube.
With the release of K8s 1.21, would this also include support for dual-stack environments? If so, would love to see this land! :-D
I would accept any PR that would add this feature
@UXabre @lyind
At least IPv6 support for minikube with the Docker driver should be easy to achieve.
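For context, the host side is mostly a matter of enabling IPv6 in the Docker daemon; a minimal /etc/docker/daemon.json sketch (the prefix below is the documentation range, just an example):

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}
```

After restarting the daemon, containers on the default bridge get addresses from that range; minikube's own network setup (the kicbase container and kubeadm flags) would still need IPv6-aware changes on top of this.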
What I finally did was to automatically set up a single-node kubeadm cluster with a CNI inside a Linux KVM guest. I'm using that for tests/CI now.
So it is definitely possible, at least on Linux with KVM.
Unfortunately, other issues prevented me from implementing this in minikube, as I needed a fast solution that covered all of them quickly.
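For anyone wanting to do the same, the kubeadm side looks roughly like this (the addresses and CIDRs below are illustrative ULA/example values, not the exact ones I used):

```yaml
# kubeadm-config.yaml -- single-node IPv6-only cluster (example values)
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "fd00:10::2"       # the node's IPv6 address
nodeRegistration:
  kubeletExtraArgs:
    node-ip: "fd00:10::2"              # make the kubelet register with its IPv6 address
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  serviceSubnet: "fd00:10:96::/112"    # IPv6 service CIDR
  podSubnet: "fd00:10:244::/64"        # IPv6 pod CIDR, must match the CNI configuration
```

Then `kubeadm init --config kubeadm-config.yaml`, untaint the node, and install a CNI that supports IPv6 (e.g. Calico).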
Can't test kube-proxy behavior around DualStack services, and applications using those services, in minikube without this support.
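Concretely, the sort of manifest I'd want to exercise against kube-proxy in minikube is a dual-stack Service like this (name and selector are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo-dualstack              # hypothetical name
spec:
  ipFamilyPolicy: PreferDualStack   # or RequireDualStack to fail without IPv6
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: echo                       # hypothetical selector
  ports:
    - port: 80
      targetPort: 8080
```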
I'm looking to recreate an IPv6 bug locally for an application that has to run in IPv6 clusters... and was hoping to use minikube. I guess I'm out of luck until this gets implemented.
If anyone would like to contribute this feature, I could review it.
@medyagh How/where would one need to implement this? Maybe I can give it a try.
I made some templates to deploy IPv6-only in LXD containers (or VMs, for testing Ceph) at https://github.com/gattytto/lxd_kube, if it's of any use. The calico.yaml in that repo is old, by the way; it uses CRI-O and snap packages, and I tried to keep it canonical. To fully recreate it: use radvd for an IPv6 range that starts with 2001, like 2001:470:1b1e:F000::/48, plus an Open vSwitch bridge so all containers/VMs get SLAAC from there, and then follow the README in that repo.
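The radvd piece of that setup looks roughly like this (the bridge name is a placeholder and the prefix is just a /64 carved out of that /48):

```
# /etc/radvd.conf -- advertise a /64 so containers/VMs autoconfigure via SLAAC
# "br0" stands in for the Open vSwitch bridge name
interface br0
{
    AdvSendAdvert on;
    prefix 2001:470:1b1e:f000::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```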