
Add a flag to `kubectl top` to include cpu/memory resource request/limits for pods and containers

ethanchowell opened this issue 3 years ago • 15 comments

What would you like to be added: An additional command line flag that, when enabled, adds the resource request and limits defined in the container manifest.

$ kubectl top pods --enumerate # or some other flag like '-o wide'
NAME          CPU(cores)    MEMORY(bytes)    CPU REQ(cores)    MEMORY REQ(bytes)    CPU LIMIT(cores)    MEMORY LIMIT(bytes)
my-pod-1      15m           10Mi             10m               5Mi                  30m                 15Mi
my-pod-2      15m           10Mi             10m               5Mi                  30m                 15Mi
my-pod-3      3m            5Mi              10m               5Mi                  30m                 15Mi

This flag should also be compatible with --containers and specifying a pod name.

Why is this needed: I mostly operate in an on-prem cluster on our own hardware, and due to the nature of their design we don't use horizontal scaling for a handful of our deployments, so we find ourselves resource-constrained more often than not. Adding a flag to show the configured resource requests and limits would save a ton of time bouncing back and forth between `kubectl top pods` and `kubectl get pod ...` to compare the two.
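For illustration, the join such a flag would need to perform is roughly the following sketch. The pod names and quantities are made up, and the quantity parsing is deliberately simplified compared to the `resource.Quantity` parser kubectl actually uses (which handles many more suffixes):

```python
# Sketch of the join `kubectl top pods --enumerate` would perform: live
# usage from the metrics API merged with the requests/limits declared in
# the pod spec. All data below is illustrative.

MEM_SUFFIXES = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}

def parse_cpu_millicores(q: str) -> int:
    """'15m' -> 15 millicores, '2' -> 2000 millicores."""
    return int(q[:-1]) if q.endswith("m") else int(float(q) * 1000)

def parse_mem_bytes(q: str) -> int:
    """'10Mi' -> 10485760 bytes."""
    for suffix, factor in MEM_SUFFIXES.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)

def merge_rows(usage: dict, specs: dict) -> list:
    """Join live usage with requests/limits; '<none>' when unset."""
    rows = []
    for name, u in usage.items():
        s = specs.get(name, {})
        rows.append({
            "name": name,
            "cpu": u["cpu"],
            "mem": u["memory"],
            "cpu_req": s.get("requests", {}).get("cpu", "<none>"),
            "mem_req": s.get("requests", {}).get("memory", "<none>"),
            "cpu_lim": s.get("limits", {}).get("cpu", "<none>"),
            "mem_lim": s.get("limits", {}).get("memory", "<none>"),
        })
    return rows

usage = {"my-pod-1": {"cpu": "15m", "memory": "10Mi"}}
specs = {"my-pod-1": {"requests": {"cpu": "10m", "memory": "5Mi"},
                      "limits": {"cpu": "30m", "memory": "15Mi"}}}
for row in merge_rows(usage, specs):
    pct = 100 * parse_cpu_millicores(row["cpu"]) / parse_cpu_millicores(row["cpu_req"])
    print(f'{row["name"]}: {row["cpu"]} used vs {row["cpu_req"]} requested ({pct:.0f}%)')
```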

ethanchowell avatar May 13 '22 02:05 ethanchowell

@ethanchowell Check out the resource-capacity kubectl plugin, which looks like it might do what you want.

$ kubectl resource-capacity --util
NODE           CPU REQUESTS   CPU LIMITS   CPU UTIL    MEMORY REQUESTS   MEMORY LIMITS   MEMORY UTIL
*              1250m (10%)    300m (2%)    254m (2%)   590Mi (3%)        490Mi (3%)      4978Mi (32%)
k8s-master     850m (21%)     100m (2%)    180m (4%)   220Mi (2%)        220Mi (2%)      2362Mi (30%)
k8s-worker-1   200m (5%)      100m (2%)    44m (1%)    250Mi (6%)        50Mi (1%)       1568Mi (41%)
k8s-worker-2   200m (5%)      100m (2%)    31m (0%)    120Mi (3%)        220Mi (6%)      1049Mi (28%)
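
If it helps, the plugin can be installed through the krew plugin manager (assuming krew itself is already set up), and it can also break usage down per pod:

$ kubectl krew install resource-capacity
$ kubectl resource-capacity --pods --util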

brianpursley avatar Jun 10 '22 19:06 brianpursley

/triage accepted

mpuckett159 avatar Jun 22 '22 21:06 mpuckett159

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Sep 20 '22 21:09 k8s-triage-robot

/remove-lifecycle stale

mpuckett159 avatar Sep 21 '22 21:09 mpuckett159

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Dec 20 '22 22:12 k8s-triage-robot

/remove-lifecycle stale

ethanchowell avatar Dec 22 '22 01:12 ethanchowell

I was looking at the kubectl top flags when I found this issue. It would be quite helpful. Is it still a work in progress?

Ritikaa96 avatar Jul 28 '23 06:07 Ritikaa96