Karpenter not scaling nodes when ResourceQuota in namespace is fully utilized.
Description
Observed Behavior:
Karpenter does not scale nodes even when the ResourceQuota defined in a namespace has been fully utilized. This leads to pods remaining unscheduled due to lack of available resources.
Expected Behavior:
Karpenter should automatically scale nodes to meet the additional resource demands when the ResourceQuota in a namespace is fully utilized, allowing new pods to be scheduled.
Reproduction Steps (Please include YAML):
- Define a ResourceQuota in a namespace with specific resource limits (CPU, memory, etc.).
- Deploy a workload that consumes the ResourceQuota completely.
- Observe that Karpenter does not initiate node scaling even when the quota is fully consumed.
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: example-quota
  namespace: example-namespace
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 1Gi
    limits.cpu: "4"
    limits.memory: 2Gi
```
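For step 2, a minimal workload sketch that fully consumes the requests quota above (the Deployment name and image are illustrative; note that with a `limits.*` quota in place, every container must also declare limits or admission will reject it):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quota-filler          # illustrative name
  namespace: example-namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: quota-filler
  template:
    metadata:
      labels:
        app: quota-filler
    spec:
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              cpu: "1"        # 2 replicas x 1 CPU  = requests.cpu quota of "2"
              memory: 512Mi   # 2 x 512Mi           = requests.memory quota of 1Gi
            limits:
              cpu: "2"        # 2 x 2 CPU           = limits.cpu quota of "4"
              memory: 1Gi     # 2 x 1Gi             = limits.memory quota of 2Gi
```

Scaling this Deployment to 3 replicas should then trigger the quota rejection described in step 3.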
Versions:
- Karpenter Version: v0.29.0
- Kubernetes Version (`kubectl version`): 1.28
I am not sure if I completely understand. From what I know about ResourceQuota, it constrains the aggregate resource consumption (and object counts) within a namespace. If that limit is reached in a namespace governed by a ResourceQuota, there will not be any additional provisioning of nodes unless new pods are created in a different namespace whose quota hasn't been fully used.
Suppose the example namespace is fully utilizing its resource limits (500 millicores of CPU and 10 GB of memory). At this point, no more pods can be scheduled in the example namespace due to the resource constraints set by the limits and requests, even though the worker node still has available resources (1.5 cores of CPU and 6 GB of memory). Karpenter does not scale out additional nodes because it only considers the overall node resource usage, not the namespace-specific resource quotas. Therefore, it sees the existing node as having sufficient resources and does not trigger the creation of new nodes.
I want to ensure that Karpenter will add new nodes when necessary to allow further pod scheduling within the namespace, even if the existing nodes are not fully utilized.
Karpenter reacts to pending pods. If pod creation in a namespace is blocked by an exhausted ResourceQuota, those pods are rejected at admission and never become pending, so Karpenter never considers them for provisioning. It's the resource quota that's preventing the pods from being created in the first place. Unless I have misunderstood what you have said, this is the expected behavior.
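This can be confirmed from the cluster side. Because the quota check happens at admission, the blocked pods are never created at all, so there are nothing pending for Karpenter to provision for. A hedged diagnostic sketch (object names follow the example above; yours will differ):

```shell
# The ReplicaSet controller keeps retrying and records the quota rejection
# as FailedCreate events mentioning "exceeded quota":
kubectl describe replicaset -n example-namespace

# There are no Pending pods for the blocked replicas, which is why
# Karpenter (which provisions for pending, unschedulable pods) stays idle:
kubectl get pods -n example-namespace --field-selector=status.phase=Pending
```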
Thanks for your reply @jigisha620
ResourceQuota imposes a restriction that prevents Karpenter from scaling nodes if any namespace's quota is fully utilized. How can I configure Karpenter to scale nodes even when a namespace has reached its ResourceQuota limit?
How can I supersede such a restriction from ResourceQuota when using Karpenter? Our use case: there are multiple namespaces in our physical cluster, and a ResourceQuota (with requests and limits parameters) is applied to each namespace. When any of these namespaces is fully utilized in terms of CPU and memory, Karpenter doesn't scale/add new nodes.
We understand that Karpenter gives priority to the ResourceQuota configuration; however, we need some configuration/policy/rule in Karpenter that overrides the ResourceQuota and allows it to scale nodes when any namespace is full.
If Karpenter doesn't suffice for our use case, how can we scale the cluster when any namespace is fully utilized?
kindly suggest!