Dual-stack support for Service type LoadBalancer NLB
What would you like to be added:
Support for dual-stack Network Load Balancers (NLB) within the Kubernetes Cloud Controller Manager (CCM) for AWS. This enhancement would enable the provisioning of NLBs with both IPv4 and IPv6 addresses (dual-stack) for Kubernetes Services of type LoadBalancer.
The AWS Load Balancer Controller (LBC) provides an annotation for the IP address type, alb.ingress.kubernetes.io/ip-address-type, with the valid values ipv4 or dualstack. It would be nice to have a similar experience here.
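For illustration, here is a minimal sketch of how a Service-side equivalent could be read, assuming the CCM mirrors the ALBC's key under the service.beta.kubernetes.io namespace (the annotation key appears later in this thread; the helper function itself is hypothetical, not an existing CCM API):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Assumed annotation key, mirroring the ALBC's
// alb.ingress.kubernetes.io/ip-address-type for Services.
const annIPAddressType = "service.beta.kubernetes.io/aws-load-balancer-ip-address-type"

// ipAddressTypeForService returns the requested NLB IP address type,
// defaulting to "ipv4" when the annotation is absent.
func ipAddressTypeForService(svc *v1.Service) (string, error) {
	switch v := svc.Annotations[annIPAddressType]; v {
	case "", "ipv4":
		return "ipv4", nil
	case "dualstack":
		return "dualstack", nil
	default:
		return "", fmt.Errorf("unsupported value %q for %s", v, annIPAddressType)
	}
}

func main() {
	svc := &v1.Service{ObjectMeta: metav1.ObjectMeta{
		Annotations: map[string]string{annIPAddressType: "dualstack"},
	}}
	t, _ := ipAddressTypeForService(svc)
	fmt.Println(t) // prints: dualstack
}
```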
Why is this needed:
AWS officially announced IPv6 support for Network Load Balancers (NLB) in 2020: https://www.amazonaws.cn/en/new/2020/network-load-balancer-now-supports-ipv6/. This capability is increasingly vital for modern cloud-native deployments that require dual-stack networking, allowing services to be reachable over both IPv4 and IPv6 addresses.
There has been consistent demand and prior effort from the community to integrate this functionality, as evidenced by existing issues (#477, #243) and a past pull request (#497). Implementing dual-stack NLB support in cloud-provider-aws would align Kubernetes' load balancer provisioning with AWS's native features, offering a comprehensive solution for users aiming to deploy IPv6-enabled services. This enhancement would significantly improve network flexibility and facilitate adherence to IPv6 mandates, enabling Kubernetes Services to fully leverage AWS's dual-stack networking capabilities.
/kind feature
@mtulio, I have spoken with @elmiko about the addition of dual-stack NLB support in the Kubernetes Cloud Controller Manager (CCM) for AWS, and I am looking at how I can help deliver this feature.
Hey @jimohabdol - That's great to hear from you, thanks for pinging!
Would you mind sharing your thoughts on how this feature could work? I still have some open questions, especially around fully understanding how the ALBC implements it (if so, it would be nice for the CCM to follow a similar pattern):
- Do you have a specific requirement for this scenario?
- Are you planning to use `ip-address-type` as `ipv6` on target groups (backends), or just `dualstack` on NLBs (frontend)?
- Do you use mixed subnets ("public"/LB, "private"/nodes) in your environment (e.g. single-stack IPv4 or IPv6, dual-stack)?
- How do you think the user would interact with this feature? (through annotations, config, etc.)
cc @nrb (my colleague, who would be interested in this discussion)
Do you have a specific requirement for this scenario?
- Not really.
Are you planning to use `ip-address-type` as `ipv6` on target groups (backends), or just `dualstack` on NLBs (frontend)?
- I would prefer to use `dualstack` on NLBs: Client (IPv4/IPv6) -> NLB (dual-stack) -> Target Group (instance) -> Pod (IPv4) (see the sketch below).
Do you use mixed subnets ("public"/LB, "private"/nodes) in your environment (e.g. single-stack IPv4 or IPv6, dual-stack)?
- Mixed subnets.
How do you think the user would interact with this feature? (through annotations, config, etc.)
- Interacting through annotations is more user-friendly.
cc @nrb
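To make the flow above concrete, here is a hedged sketch of provisioning a dual-stack NLB frontend with the AWS SDK for Go v2; the load balancer name and subnet IDs are placeholders, and the CCM may well use a different SDK version or call path internally:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	elbv2 "github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2"
	"github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2/types"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := elbv2.NewFromConfig(cfg)

	// Create the NLB with IpAddressType "dualstack"; listeners can still
	// forward to an IPv4 instance target group, matching the flow above.
	out, err := client.CreateLoadBalancer(context.TODO(), &elbv2.CreateLoadBalancerInput{
		Name:          aws.String("example-dualstack-nlb"), // placeholder
		Type:          types.LoadBalancerTypeEnumNetwork,
		IpAddressType: types.IpAddressTypeDualstack,
		Subnets:       []string{"subnet-aaaa", "subnet-bbbb"}, // dual-stack public subnets (placeholders)
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.ToString(out.LoadBalancers[0].DNSName))
}
```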
@mtulio @nrb I’m just following up to see if you’ve had a chance to review my feedback.
@jimohabdol You mention having mixed subnets. I'm interpreting that to mean you have both IPv4 and IPv6 subnets associated with the load balancer. Based on the desired flow, it looks like the IPv6 subnets are only on the public side and the IPv4 subnets are only on the private side. Is that an accurate statement? Put another way, the only requirement for IPv6 at all would be at the cluster ingress.
How are you handling dual stack in your environments right now, if at all?
@nrb, your interpretation of the mixed subnets is correct: dual-stack applies only to the load balancer subnets, and the target groups remain IPv4. IMO, IPv6 is not required inside the cluster.
@nrb @mtulio @damdo @elmiko, how are we moving forward with this issue?
@nrb and @mtulio are working on this behind the scenes. They'll probably report back soon. Thanks @jimohabdol
Appreciate the update, @damdo.
@mtulio @jimohabdol Looking through how the ALBC handles this, they use annotations namespaced to kubernetes.io, as seen in this list.
Given that these are not namespaced to ALBC, I'm leaning towards re-using the annotations. We don't want to support all of them; users are better served using ALBC if they want all customization options. To begin with, supporting service.beta.kubernetes.io/aws-load-balancer-ip-address-type with values of ipv4 or dualstack would be the bare minimum. All other resources and attributes would be auto-created and labeled by the CCM.
Do we need others at the moment? Should we be flexible enough to support some of the routing annotations for the internal side? The internal IPv6 support could be omitted for now, given our use cases.
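As a sketch of what that bare minimum could look like on the reconcile side, the CCM could flip an existing NLB's address type via the ELBv2 SetIpAddressType API when the annotation-derived value drifts from the current one. The helper below is hypothetical; only the annotation key and its two values come from this thread:

```go
package lb

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	elbv2 "github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2"
	"github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2/types"
)

// reconcileIPAddressType is a hypothetical helper: it updates an existing
// NLB's address type when the annotation-derived desired value differs
// from the current one, using the ELBv2 SetIpAddressType call.
func reconcileIPAddressType(ctx context.Context, client *elbv2.Client, lbArn string, current, desired types.IpAddressType) error {
	if current == desired {
		return nil // already in the desired state
	}
	_, err := client.SetIpAddressType(ctx, &elbv2.SetIpAddressTypeInput{
		LoadBalancerArn: aws.String(lbArn),
		IpAddressType:   desired, // IpAddressTypeIpv4 or IpAddressTypeDualstack
	})
	return err
}
```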