zero-to-jupyterhub-k8s

Exploratory documentation: managing maximum pods per node limitations

consideRatio opened this issue 5 years ago • 4 comments

When a Z2JH deployment grows, bigger and bigger nodes are typically used, but at some point these nodes will refuse to accept more pods, because all k8s clusters have an upper limit on pods per node related to IP address range allocations. Since the limitation relates to IP addresses, the Kubernetes cluster's choice of a Container Network Interface (CNI) plugin can sometimes influence the limit.

This is exploratory documentation on the various mitigation strategies and the state of the issues on various cloud providers. Perhaps it will find its way into the z2jh guide or elsewhere once we have a better overview of the situation.

GKE (Google)

GKE has a default limit of 110 pods per node, which can be configured lower (but not higher) depending on network IP range allocations.

Although 110 Pods per node is a hard upper limit, you can configure a lower maximum number of Pods per node.
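The connection between the pod limit and IP range allocation can be sketched numerically. GKE sizes each node's pod IP range to hold at least twice the configured maximum number of pods (so pod IPs can be recycled safely). A rough illustration of that rule - not any official API:

```python
import math

def pod_range_prefix(max_pods: int) -> int:
    """Smallest per-node pod CIDR prefix that holds at least
    2 * max_pods addresses, per GKE's documented 2x buffer."""
    addresses_needed = 2 * max_pods
    bits = math.ceil(math.log2(addresses_needed))
    return 32 - bits

# GKE's default of 110 pods per node consumes a /24 (256 addresses)
# per node; lowering max pods shrinks the range each node consumes.
print(pod_range_prefix(110))  # 24
print(pod_range_prefix(32))   # 26
print(pod_range_prefix(8))    # 28
```

This is why lowering max pods per node helps when the cluster's pod IP range is the scarce resource: each node carves off a smaller slice.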

GKE references

  • https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr

EKS (Amazon)

On EKS, the nodes in the cluster will have different pod limits depending on their machine type.

Comment from @yuvipanda on Gitter: [...] but by default, EKS varies wildly based on what kind of node you are using. It's defined in here. This is a problem - if we use an m4.large node, I think it has enough RAM & CPU to run many hubs. But, it can hold only 28 pods! Between kube-system and hub pods, that's not nearly enough. So if you have two hubs (staging + prod), you end up needing 2 nodes instead of 1!

Comment from @yuvipanda on Gitter:

If you use Calico, you'll be easily able to get 100 pods per node, and based on performance up to several hundred no problems. But if you use a 'native' VPC solution like EKS' default, it's going to be more severely limited.

EKS references

  • https://docs.aws.amazon.com/eks/latest/userguide/cni-custom-network.html
  • https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt

AKS (Microsoft)

Comment from @yuvipanda on Gitter:

On Azure, 'kubenet' (which isn't 'VPC native') gives you a lot more pods than the default (Azure CNI). You're gonna get slightly more network latency with kubenet, but totally worth it for all our use cases.

AKS references

  • https://docs.microsoft.com/en-us/azure/aks/configure-kubenet#ip-address-availability-and-exhaustion

consideRatio avatar Sep 30 '20 12:09 consideRatio

This is great, @consideRatio! I'll try to provide some more context here.

Understanding Kubernetes's networking model helps a lot with understanding the 'max pod per node' limitation.

  1. All pods have their own IP that doesn't change during the lifetime of the pod.
  2. All pods on every node must be able to talk to all pods on any other node via the other pod's IP, as if they were all on one giant flat network. This is what allows applications to run fairly unmodified in kubernetes - nothing kubernetes-specific is needed in the application.

So each time a pod is created, kubernetes must:

  1. Give it a unique IP.
  2. Make sure any pod on the cluster can talk to this new pod on its IP.
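To make the first task concrete, here is a toy (entirely hypothetical) per-node IP allocator in the spirit of what a CNI plugin's IPAM component does - hand out unique pod IPs until the node's range is exhausted:

```python
import ipaddress

class NodePodIPAM:
    """Toy per-node IP allocator: hands out unique pod IPs from the
    node's pod CIDR and refuses when the range is exhausted - which
    is the 'max pods per node' limit in miniature."""

    def __init__(self, node_pod_cidr: str):
        net = ipaddress.ip_network(node_pod_cidr)
        self.free = list(net.hosts())
        self.allocated = {}

    def allocate(self, pod_name: str) -> str:
        if not self.free:
            raise RuntimeError("pod CIDR exhausted: node is 'full'")
        ip = self.free.pop(0)
        self.allocated[pod_name] = ip
        return str(ip)

    def release(self, pod_name: str) -> None:
        # Returning the IP to the pool is why a buffer of spare
        # addresses matters: IPs get recycled as pods come and go.
        self.free.append(self.allocated.pop(pod_name))

ipam = NodePodIPAM("10.4.7.0/28")  # 14 usable addresses on this node
print(ipam.allocate("jupyter-alice"))  # 10.4.7.1
```

Real plugins differ wildly in *how* they do this (and in how they solve the second task, routing), but they all hit some version of this exhaustion point.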

Kubernetes uses a standard - the Container Network Interface (CNI) - to accomplish these tasks. The cluster admin configures the Kubernetes cluster to use a particular CNI implementation - there are a million of them - based on user needs, the hardware in use, network topology, or just their whims.

Cloud providers offer their own networking APIs that can do dynamic, interesting things. One common API they offer is the ability to provision a new network interface and attach it to a running node. These network interfaces can then be assigned multiple IP addresses. You can attach a number of such network interfaces to a given node - and any traffic in the network to an IP address of any of those interfaces is automatically routed to the node! On AWS these are Elastic Network Interfaces, on GCE they're just Network Interfaces, etc. This is pretty fast, and CNI implementations often just piggyback on this functionality.

On AWS, if you use the default [AWS VPC CNI plugin](https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html), the following things happen when a new pod is created:

  1. A new Elastic Network Interface is created if needed.
  2. An IP is allocated from a pre-selected subnet and assigned to the Elastic Network Interface.
  3. This interface is attached to the node the pod is running on, so all traffic to this IP will be automatically sent to this node.
  4. Magic is performed inside the node to make sure that traffic coming to that particular IP goes to the correct pod.

This is very fast, efficient, and means other services in your VPC can talk to your pods very easily.

But (possibly) creating a new network interface and IP for each pod has one very severe limitation - on EC2, only a limited number of network interfaces can be attached to any given node, and each can have only a limited number of IPs! This number depends on the size of the node. And since the number of pods depends on the number of IPs and network interfaces attachable to the node, using the default CNI plugin on EKS severely limits the maximum number of pods you can have on each node!

This file lists the maximum number of pods allowed on each node type. Let's take m5.large. It has 8G of RAM and 2 CPUs. If I give each user pod a RAM guarantee of 128MB, I should be able to fit 62 user pods in there. But an m5.large instance can have only 3 network interfaces, each with 10 IPs. So this limits us to an absolute maximum of 30 IPs per node. One IP is for the node itself, leaving 29 for pods - which is the max number of pods allowed on an m5.large.

This is wasteful - even though you could fit 62 user pods, Kubernetes won't schedule anything there past 29 pods, even though a lot of memory is still available. Bad for cost. You could start using m5.xlarge instances, but they too allow only 58 pods - far fewer than the number of user pods you can theoretically schedule on them.
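The arithmetic above can be reproduced with the formula AWS documents alongside the eni-max-pods.txt file, max pods = ENIs * (IPs per ENI - 1) + 2. A sketch for illustration, using the instance figures quoted above:

```python
def eks_max_pods(max_enis: int, ips_per_eni: int) -> int:
    """AWS VPC CNI pod limit: each ENI's primary IP is reserved for
    the interface itself, and 2 is added for host-networking pods."""
    return max_enis * (ips_per_eni - 1) + 2

def ram_capacity_pods(node_ram_mib: int, pod_guarantee_mib: int) -> int:
    """How many pods the node could fit on memory alone."""
    return node_ram_mib // pod_guarantee_mib

# m5.large: 3 ENIs x 10 IPs, 8 GiB RAM
print(eks_max_pods(3, 10))               # 29 - the IP-imposed ceiling
print(ram_capacity_pods(8 * 1024, 128))  # 64 - memory alone would allow
                                         # this many (~62 after overhead)

# m5.xlarge: 4 ENIs x 15 IPs
print(eks_max_pods(4, 15))               # 58
```

The gap between the two numbers is the wasted capacity being discussed: the IP ceiling, not memory or CPU, is what fills the node.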

So, if you're running a teaching hub on EKS and want to pack user pods in as densely as possible, you must use a custom CNI plugin. The cost savings are worth it! EKS doesn't let you actively choose one on cluster creation, which sucks - but hopefully they'll fix that sometime!

Azure makes it slightly easier - you can choose which CNI plugin you want on cluster creation! Their Azure CNI plugin has the same problems mentioned here by default, although theoretically you can change that number (I haven't tried). The kubenet plugin gives you a max of 110 pods per node, which will be familiar to GKE users.

There are probably many wrong bits here, but hope this was useful :)

yuvipanda avatar Sep 30 '20 13:09 yuvipanda

I just enabled custom CNI for EKS on AWS, and now hub is not working.

[I 2022-03-30 21:14:35.970 JupyterHub app:2479] Running JupyterHub version 1.5.0
[I 2022-03-30 21:14:35.971 JupyterHub app:2509] Using Authenticator: jupyterhub.auth.DummyAuthenticator-1.5.0
[I 2022-03-30 21:14:35.971 JupyterHub app:2509] Using Spawner: kubespawner.spawner.KubeSpawner-1.1.0
[I 2022-03-30 21:14:35.971 JupyterHub app:2509] Using Proxy: jupyterhub.proxy.ConfigurableHTTPProxy-1.5.0
[I 2022-03-30 21:14:36.056 JupyterHub app:2546] Initialized 0 spawners in 0.003 seconds
[I 2022-03-30 21:14:36.058 JupyterHub app:2758] Not starting proxy
[W 2022-03-30 21:14:41.217 JupyterHub proxy:851] api_request to the proxy failed with status code 599, retrying...
[W 2022-03-30 21:14:44.924 JupyterHub proxy:851] api_request to the proxy failed with status code 599, retrying...
[W 2022-03-30 21:14:47.904 JupyterHub proxy:851] api_request to the proxy failed with status code 599, retrying...
[W 2022-03-30 21:14:51.928 JupyterHub proxy:851] api_request to the proxy failed with status code 599, retrying...
[W 2022-03-30 21:14:56.996 JupyterHub proxy:851] api_request to the proxy failed with status code 599, retrying...
[W 2022-03-30 21:15:01.010 JupyterHub proxy:851] api_request to the proxy failed with status code 599, retrying...

I am guessing I probably need to add a value to the helm chart to fix the network confusion. Is what I need to do documented somewhere? I am a bit puzzled.

ilanpillemer avatar Mar 30 '22 21:03 ilanpillemer

@ilanpillemer 599 means 'network timeout', which means the hub isn't able to talk to the proxy pod. I've had 0 luck getting custom CNI to actually work in EKS - is inter-pod networking actually working? You can poke at it by exec-ing into the hub pod and trying to reach the proxy yourself.

yuvipanda avatar Mar 31 '22 20:03 yuvipanda

I managed to get it all working. My problem was in the ACL rules. Once I got those corrected JupyterHub spun up as expected, but using the secondary CIDR range and no longer exhausting the more limited primary CIDR. So I moved from 0 luck to success by the end of the week. :)

ilanpillemer avatar Apr 01 '22 00:04 ilanpillemer