Add a flag to `kubectl top` to include cpu/memory resource request/limits for pods and containers
What would you like to be added: An additional command-line flag that, when enabled, displays the resource requests and limits defined in the container manifest alongside current usage.
$ kubectl top pods --enumerate # or some other flag like '-o wide'
NAME       CPU(cores)   MEMORY(bytes)   CPU REQ(cores)   MEMORY REQ(bytes)   CPU LIMIT(cores)   MEMORY LIMIT(bytes)
my-pod-1   15m          10Mi            10m              5Mi                 30m                15Mi
my-pod-2   15m          10Mi            10m              5Mi                 30m                15Mi
my-pod-3   3m           5Mi             10m              5Mi                 30m                15Mi
This flag should also be compatible with --containers and specifying a pod name.
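Until such a flag exists, the merged view above can be approximated by pulling requests/limits with `kubectl get` custom-columns and joining them to `kubectl top` output on the pod name. A rough sketch, where the sample strings stand in for live command output and the column paths assume single-container pods:

```shell
#!/usr/bin/env bash
# Live data would come from:
#   top_out="$(kubectl top pods --no-headers)"
#   spec_out="$(kubectl get pods --no-headers -o custom-columns='NAME:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu,MEM_REQ:.spec.containers[*].resources.requests.memory,CPU_LIM:.spec.containers[*].resources.limits.cpu,MEM_LIM:.spec.containers[*].resources.limits.memory')"
# Sample stand-ins matching the table in this issue:
top_out='my-pod-1 15m 10Mi
my-pod-2 15m 10Mi'
spec_out='my-pod-1 10m 5Mi 30m 15Mi
my-pod-2 10m 5Mi 30m 15Mi'

# Join on pod name (field 1) to produce one combined row per pod:
# NAME USAGE_CPU USAGE_MEM REQ_CPU REQ_MEM LIM_CPU LIM_MEM
join <(sort <<<"$top_out") <(sort <<<"$spec_out")
```

This is exactly the back-and-forth the proposed flag would eliminate; it also breaks down for multi-container pods, where custom-columns flattens per-container values into one cell.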
Why is this needed:
I mostly operate in an on-prem cluster on our own hardware, and we don't use horizontal scaling for a handful of our deployments due to the nature of their design, i.e. we find ourselves resource-constrained more often than not. Adding this flag to show the configured resource requests and limits would save a ton of time bouncing back and forth between kubectl top pods and kubectl get pod ... to compare the two.
@ethanchowell Check out the resource-capacity kubectl plugin, which looks like it might do what you want.
$ kubectl resource-capacity --util
NODE           CPU REQUESTS   CPU LIMITS   CPU UTIL    MEMORY REQUESTS   MEMORY LIMITS   MEMORY UTIL
*              1250m (10%)    300m (2%)    254m (2%)   590Mi (3%)        490Mi (3%)      4978Mi (32%)
k8s-master     850m (21%)     100m (2%)    180m (4%)   220Mi (2%)        220Mi (2%)      2362Mi (30%)
k8s-worker-1   200m (5%)      100m (2%)    44m (1%)    250Mi (6%)        50Mi (1%)       1568Mi (41%)
k8s-worker-2   200m (5%)      100m (2%)    31m (0%)    120Mi (3%)        220Mi (6%)      1049Mi (28%)
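If the plugin route works for you, it can be installed through krew (this assumes krew itself is already set up); a per-pod breakdown, closer to what this issue asks for, is available via its `--pods` flag:

```shell
# Install the plugin via krew (assumes krew is installed and on PATH)
kubectl krew install resource-capacity

# Node-level view as shown above
kubectl resource-capacity --util

# Per-pod requests/limits/utilization; add --containers for container rows
kubectl resource-capacity --pods --util
```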
/triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
I was looking at kubectl top flags when I found this issue. It would be quite helpful. Is it still a work in progress?