cloud-provider-aws

Dual-stack support for Service type LoadBalancer NLB

Open mtulio opened this issue 5 months ago • 12 comments

What would you like to be added:

Support for dual-stack Network Load Balancers (NLB) within the Kubernetes Cloud Controller Manager (CCM) for AWS. This enhancement would enable the provisioning of NLBs with both IPv4 and IPv6 addresses (dual-stack) for Kubernetes Services of type LoadBalancer.

The AWS Load Balancer Controller (LBC) provides an annotation for the IP address type, alb.ingress.kubernetes.io/ip-address-type, with the valid values ipv4 or dualstack. It would be nice to have a similar experience in the CCM.
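
For reference, this is roughly how the ALBC exposes that option on an Ingress. The annotation key and its values come from the description above; the resource name, ingress class, and backend are placeholders for illustration:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo                    # placeholder name
  annotations:
    # ALBC annotation; valid values are ipv4 or dualstack
    alb.ingress.kubernetes.io/ip-address-type: dualstack
spec:
  ingressClassName: alb         # assumes the ALBC ingress class
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo      # placeholder backend Service
                port:
                  number: 80
```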

Why is this needed:

AWS officially announced IPv6 support for Network Load Balancers (NLB) in 2020 (https://www.amazonaws.cn/en/new/2020/network-load-balancer-now-supports-ipv6/). This capability is increasingly important for modern cloud-native deployments that require dual-stack networking, allowing services to be reachable via both IPv4 and IPv6 addresses.

There has been consistent demand for this functionality, along with prior efforts from the community to integrate it, as evidenced by existing issues (#477, #243) and a past pull request (#497). Implementing dual-stack NLB support in cloud-provider-aws would align Kubernetes' load balancer provisioning with AWS's native features, offering a comprehensive solution for users aiming to deploy IPv6-enabled services. This enhancement would significantly improve network flexibility and facilitate adherence to IPv6 mandates, enabling Kubernetes services to fully leverage AWS's dual-stack networking capabilities.

/kind feature

mtulio avatar Jul 29 '25 22:07 mtulio

This issue is currently awaiting triage.

If cloud-provider-aws contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Jul 29 '25 22:07 k8s-ci-robot

@mtulio, I have spoken with @elmiko about contributing to dual-stack NLB support in the Kubernetes Cloud Controller Manager (CCM) for AWS, and I am looking at how I can help deliver this feature.

jimohabdol avatar Sep 12 '25 18:09 jimohabdol

Hey @jimohabdol - That's great to hear from you, thanks for pinging!

Would you mind sharing your thoughts on how this feature could look? I still have some open questions, especially around fully understanding how the ALBC implements it (if it does, it would be nice for the CCM to follow a similar pattern):

  • Do you have a specific requirement for this scenario?
  • Are you planning to use ip-address-type as ipv6 on target groups (backends) or just dualstack on NLBs (frontend)?
  • Do you use mixed subnets ("public"/LB, "private"/nodes) in your environment (e.g.: single-stack IPv4 or IPv6, dual-stack)?
  • How do you think the user would interact with this feature? (through annotations, config, etc)

cc @nrb (my colleague that would be interested in this discussion)

mtulio avatar Sep 12 '25 20:09 mtulio

Do you have a specific requirement for this scenario?

  • Not really.

Are you planning to use ip-address-type as ipv6 on target groups (backends) or just dualstack on NLBs (frontend)?

  • I would prefer to use dualstack on the NLB: Client (IPv4/IPv6) -> NLB (dual-stack) -> Target Group (instance) -> Pod (IPv4)

Do you use mixed subnets ("public"/LB, "private"/nodes) in your environment (e.g.: single-stack IPv4 or IPv6, dual-stack)?

  • Mixed subnets

How do you think the user would interact with this feature? (through annotations, config, etc)

  • Interacting through annotations is more user-friendly

jimohabdol avatar Sep 14 '25 16:09 jimohabdol

cc. @nrb

damdo avatar Sep 15 '25 14:09 damdo

@mtulio @nrb I’m just following up to see if you’ve had a chance to review my feedback.

jimohabdol avatar Sep 18 '25 08:09 jimohabdol

@jimohabdol You mention having mixed subnets. I'm interpreting that to mean you have both IPv4 and IPv6 subnets associated with the load balancer. Based on the desired flow, it looks like the IPv6 subnets are only on the public side and the IPv4 subnets are only on the private side. Is that an accurate statement? Put another way, the only requirement for IPv6 at all would be at the cluster ingress.

How are you handling dual stack in your environments right now, if at all?

nrb avatar Sep 18 '25 18:09 nrb

@nrb, your interpretation of the mixed subnets is correct. Dual-stack is needed only for the load balancer subnets, and the target groups remain IPv4. IMO, IPv6 is not required inside the cluster.

jimohabdol avatar Sep 19 '25 15:09 jimohabdol

@nrb @mtulio @damdo @elmiko, how are we moving forward with this issue?

jimohabdol avatar Oct 04 '25 15:10 jimohabdol

@nrb and @mtulio are working on this behind the scenes. They'll probably report back soon. Thanks @jimohabdol

damdo avatar Oct 24 '25 08:10 damdo

Appreciate the update, @damdo.

jimohabdol avatar Oct 24 '25 10:10 jimohabdol

@mtulio @jimohabdol Looking through how the ALBC handles this, they use annotations namespaced to kubernetes.io, as seen in this list.

Given that these are not namespaced to ALBC, I'm leaning towards re-using the annotations. We don't want to support all of them; users are better served using ALBC if they want all customization options. To begin with, supporting service.beta.kubernetes.io/aws-load-balancer-ip-address-type with values of ipv4 or dualstack would be the bare minimum. All other resources and attributes would be auto-created and labeled by the CCM.
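
For illustration, a minimal Service manifest under that proposal could look like the sketch below. The ip-address-type annotation and its values are the ones proposed above; the aws-load-balancer-type: nlb annotation is the existing way to request an NLB from the CCM, and the name, selector, and ports are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo                    # placeholder name
  annotations:
    # existing annotation to request an NLB from the CCM
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # proposed annotation; valid values would be ipv4 or dualstack
    service.beta.kubernetes.io/aws-load-balancer-ip-address-type: dualstack
spec:
  type: LoadBalancer
  selector:
    app: echo                   # placeholder selector
  ports:
    - port: 80
      targetPort: 8080          # placeholder ports
```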

Do we need others at the moment? Should we be flexible enough to support some of the routing annotations for the internal side? The internal IPv6 support could be omitted for now, given our use cases.

nrb avatar Oct 24 '25 20:10 nrb