IPv6 egress with IPv4 Cluster/CNI
I'm trying to determine if this is actually feasible.
I have an AWS VPC configured with subnets that are "dual-stack": they have both IPv4 and IPv6 addresses. There's an egress-only internet gateway configured, along with DNS64 and NAT64. If it matters, this is the VPC configuration in Terraform:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = local.name
cidr = local.vpc_cidr
azs = local.azs
private_subnets = local.private_subnet_cidrs
public_subnets = local.public_subnet_cidrs
public_subnet_assign_ipv6_address_on_creation = true
private_subnet_assign_ipv6_address_on_creation = true
public_subnet_ipv6_prefixes = [0, 1, 2]
private_subnet_ipv6_prefixes = [3, 4, 5]
enable_nat_gateway = true
enable_ipv6 = true
public_subnet_tags = {
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/role/internal-elb" = 1
}
tags = local.tags
}
I can confirm my EKS nodes are getting IPv6 addresses on them.
I've set the CNI environment variable ENABLE_V6_EGRESS = "true".
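In case the mechanism matters, I set it on the aws-node DaemonSet (the default VPC CNI install in kube-system) roughly like this:

```sh
# Set the egress env var on the VPC CNI daemonset; the aws-node pods
# roll automatically to pick it up
kubectl set env daemonset/aws-node -n kube-system ENABLE_V6_EGRESS=true

# Confirm the variable landed on the container spec
kubectl describe daemonset/aws-node -n kube-system | grep ENABLE_V6_EGRESS
```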
I can confirm outbound IPv6 connectivity to the internet; it works great from the nodes.
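For example, directly on a node over SSH/SSM (the target host here is just an arbitrary IPv6-reachable endpoint I used for testing):

```sh
# Both name resolution and connectivity work over IPv6 from the node
ping6 -c 3 ipv6.google.com
curl -6 -sS -o /dev/null -w '%{http_code}\n' https://ipv6.google.com
```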
When a node joins, it doesn't get IPv6 forwarding enabled, so I created a DaemonSet to enable it and verified that the nodes do have IPv6 forwarding turned on.
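The DaemonSet is essentially a privileged pod in the host network namespace running the equivalent of:

```sh
# Enable IPv6 forwarding on the node (what my DaemonSet effectively runs)
sysctl -w net.ipv6.conf.all.forwarding=1

# Verify from the node afterwards; expect "net.ipv6.conf.all.forwarding = 1"
sysctl net.ipv6.conf.all.forwarding
```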
However, the pods themselves have no outbound IPv6 connectivity. There's no IPv6 route added inside the pod, and the ENABLE_V6_EGRESS setting doesn't appear to have done anything.
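This is how I checked (my-test-pod is a placeholder for any running pod, assuming an image with iproute2 and ping available):

```sh
# No IPv6 routes at all inside the pod network namespace
kubectl exec my-test-pod -- ip -6 route

# And outbound IPv6 fails accordingly
kubectl exec my-test-pod -- ping6 -c 3 ipv6.google.com
```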
Is it possible to get a cluster where the CNI handles only IPv4 traffic to egress IPv6 this way, or am I way off base?