
Proposal (cmd/describe): Denote list elements in describe output

Open · ahmetb opened this issue 1 year ago • 6 comments

What would you like to be added:

I would like list elements in kubectl describe output to be denoted in a way that makes it easier for readers to tell where one element ends and the next begins.

Why is this needed:

Consider the following kubectl describe output excerpts from various resources I observed in the wild. In my opinion they all make for a poor reading experience: without any marker per list element, it is hard to tell where one element ends and the next begins.

Status.Conditions (from a custom resource)

Status:
  Conditions:
    Last Transition Time:  2023-11-29T22:57:45Z
    Message:
    Reason:                InitialCondition
    Status:                False
    Type:                  Drained
    Last Transition Time:  2023-11-29T22:57:45Z
    Message:
    Reason:                InitialCondition
    Status:                False
    Type:                  PendingDeallocation
    Last Transition Time:  2023-11-29T22:57:45Z
    Message:               Node has no scheduled maintenance disruptions
    Reason:                NoDisruptionsScheduled
    Status:                False
    Type:                  PendingMaintenance
    Last Transition Time:  2023-12-06T23:26:47Z
    Message:
    Reason:                InitialCondition
    Status:                Unknown
    Type:                  Converged

Kruise CloneSet (custom resource) volumes field

Volumes:
  Empty Dir:
  Name:  jvm-tmp
  Empty Dir:
  Name:  app-conf-private
  Empty Dir:
  Name:  libexec
  Host Path:
    Path:  /etc/x
    Type:  Directory
  Name:    riddler-certs
  Host Path:
    Path:  x
    Type:  File
  Empty Dir:
  Name:  app-config
  Name:    x
  Host Path:
    Path:  x
    Type:  File
  Name:    cm-json
  Host Path:
    Path:  x
    Type:  Socket
  Name:    x
  Host Path:
    Path:  /export
    Type:  DirectoryOrCreate
  Name:    data-pack

The situation is less severe for built-in resources whose lists have a "key" (e.g. a name field):

Pod

Volumes:
  kube-api-access-tb6fd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  x-app-certs:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  xpki:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/xpki
    HostPathType:  Directory
  pki-agent-socket:
    Type:          HostPath (bare host directory volume)
    Path:          x
    HostPathType:  Socket

Proposal

For the first example, I think marking each list element like this is cleaner and preferable:

Status:
  Conditions:
  * Last Transition Time:  2023-11-29T22:57:45Z
    Message:
    Reason:                InitialCondition
    Status:                False
    Type:                  Drained
  * Last Transition Time:  2023-11-29T22:57:45Z
    Message:
    Reason:                InitialCondition
    Status:                False
    Type:                  PendingDeallocation
  * Last Transition Time:  2023-11-29T22:57:45Z
    Message:               Node has no scheduled maintenance disruptions
    Reason:                NoDisruptionsScheduled
    Status:                False
    Type:                  PendingMaintenance
  * Last Transition Time:  2023-12-06T23:26:47Z
    Message:
    Reason:                InitialCondition
    Status:                Unknown
    Type:                  Converged
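
To make the idea concrete, here is a minimal sketch in Go of what such a printer could look like. This is not the actual cmd/describe code; the Condition struct, its field set, and the writeConditions helper are simplified assumptions for illustration only. Each list element gets a "* " marker on its first line, and continuation lines are indented to the same column:

// Sketch only: NOT the cmd/describe implementation. Condition and
// writeConditions are simplified stand-ins for illustration.
package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

type Condition struct {
	LastTransitionTime string
	Message            string
	Reason             string
	Status             string
	Type               string
}

// writeConditions prints each list element with a leading "* " on its first
// line so readers can see where one element ends and the next begins.
func writeConditions(w *tabwriter.Writer, conds []Condition) {
	fmt.Fprintln(w, "Status:")
	fmt.Fprintln(w, "  Conditions:")
	for _, c := range conds {
		fmt.Fprintf(w, "  * Last Transition Time:\t%s\n", c.LastTransitionTime)
		fmt.Fprintf(w, "    Message:\t%s\n", c.Message)
		fmt.Fprintf(w, "    Reason:\t%s\n", c.Reason)
		fmt.Fprintf(w, "    Status:\t%s\n", c.Status)
		fmt.Fprintf(w, "    Type:\t%s\n", c.Type)
	}
}

func main() {
	// tabwriter aligns the value column, similar to describe output.
	w := tabwriter.NewWriter(os.Stdout, 0, 8, 2, ' ', 0)
	defer w.Flush()
	writeConditions(w, []Condition{
		{LastTransitionTime: "2023-11-29T22:57:45Z", Reason: "InitialCondition", Status: "False", Type: "Drained"},
		{LastTransitionTime: "2023-12-06T23:26:47Z", Reason: "InitialCondition", Status: "Unknown", Type: "Converged"},
	})
}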

If it were totally up to me, I'd go with something more unhinged, like this:

Status:
├── Conditions:
│   ├── Last Transition Time:  2023-11-29T22:57:45Z
│   │   Message:
│   │   Reason:                InitialCondition
│   │   Status:                False
│   │   Type:                  Drained
│   ├── Last Transition Time:  2023-11-29T22:57:45Z
│   │   Message:
│   │   Reason:                InitialCondition
│   │   Status:                False
│   │   Type:                  PendingDeallocation
│   ├── Last Transition Time:  2023-11-29T22:57:45Z
│   │   Message:               Node has no scheduled maintenance disruptions
│   │   Reason:                NoDisruptionsScheduled
│   │   Status:                False
│   │   Type:                  PendingMaintenance
│   └── Last Transition Time:  2023-12-06T23:26:47Z
│       Message:
│       Reason:                InitialCondition
│       Status:                Unknown
│       Type:                  Converged
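
For the tree-style variant, the only real decision is which connector to use: "├── " for an element that has a following sibling and "└── " for the last one, with a matching "│   " or blank guide on continuation lines. Below is a toy, self-contained Go sketch of that rule; the printTree helper and its input shape are illustrative assumptions, not a concrete cmd/describe proposal:

// Toy sketch of tree-style list rendering: the connector depends on
// whether the element is the last in the list, and continuation lines
// under a non-last element keep the "│" guide so the column stays readable.
package main

import "fmt"

func printTree(header string, elements [][]string) {
	fmt.Println(header)
	for i, lines := range elements {
		last := i == len(elements)-1
		connector, guide := "├── ", "│   "
		if last {
			connector, guide = "└── ", "    "
		}
		for j, line := range lines {
			if j == 0 {
				fmt.Println(connector + line)
			} else {
				fmt.Println(guide + line)
			}
		}
	}
}

func main() {
	printTree("Conditions:", [][]string{
		{"Type: Drained", "Status: False", "Reason: InitialCondition"},
		{"Type: Converged", "Status: Unknown", "Reason: InitialCondition"},
	})
}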

/sig cli
/area kubectl

— ahmetb, Dec 08 '23 04:12

This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

— k8s-ci-robot, Dec 08 '23 04:12

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

— k8s-triage-robot, Mar 07 '24 05:03

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

— k8s-triage-robot, Apr 06 '24 05:04