Redesign ideas for CLI

ahmetb opened this issue 3 years ago

Assuming you plan to rework this tool some day, here are some ideas.

I think rewriting this tool in Go would be immensely useful. For example, I run this tool against very large clusters where many commands sometimes take >30s to return, and I grep the results. Parallelization would help greatly.

CLI design

I think some form of kubectl allctx [OPTIONS] CONTEXT_SPEC[...CONTEXT_SPEC] -- [KUBECTL_ARGS...] would be useful.

The -- separator would disambiguate the invocation and future-proof allctx for expansion with new args/subcommands, without those being confused with arguments to kubectl. (A parsing sketch follows the examples below.)

Hypothetical examples:

# all contexts:
kubectl allctx -- get nodes     
# some contexts
kubectl allctx c1 c2 c3 -- get nodes
# pattern matching
kubectl allctx ~'.*-prod$' -- get nodes
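
A minimal parsing sketch in Go, assuming the CLI shape above; splitArgs, matchContexts, and the hard-coded context list are illustrative, not taken from any existing tool:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// splitArgs separates allctx's own arguments (context specs, options)
// from the kubectl arguments using the "--" delimiter.
func splitArgs(args []string) (specs, kubectlArgs []string) {
	for i, a := range args {
		if a == "--" {
			return args[:i], args[i+1:]
		}
	}
	return args, nil
}

// matchContexts expands context specs against the known contexts.
// A spec prefixed with "~" is treated as a regular expression.
func matchContexts(specs, all []string) ([]string, error) {
	if len(specs) == 0 {
		return all, nil // no spec means "all contexts"
	}
	var out []string
	for _, ctx := range all {
		for _, spec := range specs {
			if pat, ok := strings.CutPrefix(spec, "~"); ok {
				re, err := regexp.Compile(pat)
				if err != nil {
					return nil, err
				}
				if re.MatchString(ctx) {
					out = append(out, ctx)
					break
				}
			} else if spec == ctx {
				out = append(out, ctx)
				break
			}
		}
	}
	return out, nil
}

func main() {
	specs, kubectlArgs := splitArgs(os.Args[1:])
	// In the real tool, the context list would come from the kubeconfig.
	ctxs, err := matchContexts(specs, []string{"c1", "c2", "eu-prod", "us-prod"})
	if err != nil {
		fmt.Fprintln(os.Stderr, "bad context pattern:", err)
		os.Exit(1)
	}
	fmt.Println("contexts:", ctxs, "kubectl args:", kubectlArgs)
}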

Output design

I typically grep the output; e.g., to find unschedulable nodes across prod clusters I'd run something like kubectl allctx ~'.*-prod$' -- get nodes | grep SchedulingDisabled. In the grep output, it's important to be able to see which cluster each node belongs to.

So I recommend prefixing every output line with the context name, e.g.:

$ kubectl allctx c1 c2 -- get nodes
c1 | NAME                               STATUS                        ROLES    AGE     VERSION
c1 | x-ayx-36-sr1.prod.x.com   Ready                         <none>   68d     v1.20.5
c1 | x-bpk-27-sr1.prod.x.com   Ready                         <none>   63d     v1.20.5
c2 | NAME                               STATUS                        ROLES    AGE     VERSION
c2 | x-ayx-22-sr1.prod.x.com   Ready                         <none>   68d     v1.20.5
c2 | x-bpk-11-sr1.prod.x.com   Ready                         <none>   63d     v1.20.5

This is probably good enough to begin with.
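
For reference, a small Go sketch of such prefixing, assuming the tool reads the kubectl subprocess output line by line (prefixLines and the sample input are illustrative):

package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"strings"
)

// prefixLines copies r to w, prepending the context name to every line.
func prefixLines(context string, r io.Reader, w io.Writer) error {
	sc := bufio.NewScanner(r)
	for sc.Scan() {
		fmt.Fprintf(w, "%s | %s\n", context, sc.Text())
	}
	return sc.Err()
}

func main() {
	// In the real tool, r would be the stdout pipe of "kubectl --context=c1 ...".
	r := strings.NewReader("NAME    STATUS\nnode-1  Ready\n")
	prefixLines("c1", r, os.Stdout)
}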

Behavioral changes

A rewrite in Go can enable running all kubectl commands in parallel.

However, some users might want to see output lines from the same context next to each other. This can be the default behavior: buffer each context's output and print it only once that context has fully completed. You can even show which contexts are still being processed vs. completed on stderr (a rough sketch follows the example below), like:

Still running: 9 of 12 contexts completed.
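
A sketch of this buffered mode, assuming one kubectl subprocess per context; runBuffered and the status message wording are hypothetical, and error handling is minimal:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"sync"
	"sync/atomic"
)

// runBuffered runs kubectl once per context in parallel, buffers each
// context's output, and prints it in one piece when that context is done.
func runBuffered(contexts []string, kubectlArgs []string) {
	var (
		wg      sync.WaitGroup
		printMu sync.Mutex // serializes printing of finished contexts
		done    atomic.Int64
		total   = int64(len(contexts))
	)
	for _, ctx := range contexts {
		wg.Add(1)
		go func(ctx string) {
			defer wg.Done()
			var buf bytes.Buffer
			args := append([]string{"--context=" + ctx}, kubectlArgs...)
			cmd := exec.Command("kubectl", args...)
			cmd.Stdout = &buf
			cmd.Stderr = &buf
			err := cmd.Run()

			printMu.Lock()
			defer printMu.Unlock()
			// In the real tool, each buffered line would also be prefixed
			// with the context name before printing.
			os.Stdout.Write(buf.Bytes())
			if err != nil {
				fmt.Fprintf(os.Stderr, "warning: context %q failed: %v\n", ctx, err)
			}
			fmt.Fprintf(os.Stderr, "Still running: %d of %d contexts completed.\n",
				done.Add(1), total)
		}(ctx)
	}
	wg.Wait()
}

func main() {
	runBuffered([]string{"c1", "c2"}, []string{"get", "nodes"})
}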

Some users will want to see output as it arrives (prefixed by context, as described above), since paging over long lists can take very long. For example, kubectl allctx -- logs --label foo=bar --follow is not something you can buffer, because the logs command runs indefinitely; in that case, you can still print lines as they arrive, prefixed by the context name, e.g.:

$ kubectl allctx -- logs --label foo=bar --follow
c1| message from pod1 in c1
c2| message from pod1 in c2
c1| message from pod5 in c1
c2| message from pod3 in c2
...

In this case, you need to be careful to hold a lock while printing each line; otherwise output lines from different contexts may interleave and produce corrupted/undesired output.
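
A small sketch of that locking, where each context's goroutine would call streamPrefixed (a hypothetical helper) on its own kubectl output pipe:

package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"strings"
	"sync"
)

// streamPrefixed writes each line from r to stdout as soon as it arrives,
// prefixed with the context name. The shared mutex ensures lines from
// different contexts never interleave mid-line.
func streamPrefixed(mu *sync.Mutex, context string, r io.Reader) error {
	sc := bufio.NewScanner(r)
	for sc.Scan() {
		mu.Lock()
		fmt.Fprintf(os.Stdout, "%s| %s\n", context, sc.Text())
		mu.Unlock()
	}
	return sc.Err()
}

func main() {
	var mu sync.Mutex
	// In the real tool, each context's goroutine would read from its own
	// kubectl stdout pipe; an in-memory reader stands in here.
	streamPrefixed(&mu, "c1", strings.NewReader("message from pod1 in c1\n"))
}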

You can expose these behaviors through allctx options, e.g. --output-by-context or --output-live.

The command above won't actually work, since kubectl logs only works for a single Pod. But there's a plugin, "kubectl tail", which brings me to my next topic.

Kubectl plugin support

There are lots of good kubectl plugins out there; for example, kubectl images lists the images in a cluster. I should be able to use it via allctx, and allctx would pass --context=... to such plugins to make them work per context.

However, not all plugins accept --context at the end, so you might need to do what GNU xargs does with its -I option: let the user mark where the argument should be substituted.
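
For instance, an xargs -I style placeholder could mark where --context=... goes; the _CTX_ token, buildPluginArgs, and the -I flag below are purely hypothetical:

package main

import "fmt"

// buildPluginArgs inserts "--context=<ctx>" wherever the placeholder
// appears; if there is no placeholder, it is appended at the end, which
// covers plugins that accept trailing flags.
func buildPluginArgs(args []string, context, placeholder string) []string {
	out := make([]string, 0, len(args)+1)
	replaced := false
	for _, a := range args {
		if a == placeholder {
			out = append(out, "--context="+context)
			replaced = true
			continue
		}
		out = append(out, a)
	}
	if !replaced {
		out = append(out, "--context="+context)
	}
	return out
}

func main() {
	// e.g. kubectl allctx -I _CTX_ -- images _CTX_   (the -I flag is hypothetical)
	fmt.Println(buildPluginArgs([]string{"images", "_CTX_"}, "c1", "_CTX_"))
	fmt.Println(buildPluginArgs([]string{"images"}, "c1", "_CTX_"))
}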

ahmetb · Mar 31 '22 19:03

Another idea to go with making concurrent/parallel execution the default: you can introduce a --timeout= parameter; for contexts that do not finish in time, omit their output and print a warning to stderr. Similarly, you can print warnings for clusters whose commands return a non-zero exit code.
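
A sketch of that timeout handling, assuming one subprocess per context; runWithTimeout and the 30s value are only examples:

package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

// runWithTimeout runs kubectl for one context; if it does not finish within
// timeout, the process is killed, its output is dropped, and a warning goes
// to stderr. Non-zero exits also produce a warning.
func runWithTimeout(kctx string, args []string, timeout time.Duration) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	cmd := exec.CommandContext(ctx, "kubectl", append([]string{"--context=" + kctx}, args...)...)
	out, err := cmd.Output()
	switch {
	case ctx.Err() == context.DeadlineExceeded:
		fmt.Fprintf(os.Stderr, "warning: context %q timed out after %s; output omitted\n", kctx, timeout)
	case err != nil:
		fmt.Fprintf(os.Stderr, "warning: context %q exited with error: %v\n", kctx, err)
	default:
		os.Stdout.Write(out)
	}
}

func main() {
	runWithTimeout("c1", []string{"get", "nodes"}, 30*time.Second)
}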

ahmetb · Mar 31 '22 22:03

Thank you @ahmetb for all the valuable input. As we discussed offline, it makes much more sense to transfer ownership to you.

onatm · Aug 25 '22 15:08