proposal: pkg/cmd/config: introduce alias management
What would you like to be added?
Extend the kubectl config command with new subcommands to handle aliases:
% kubectl config
...
current-context Display the current-context
+ delete-alias Delete the specified alias from the kubeconfig
delete-cluster Delete the specified cluster from the kubeconfig
delete-context Delete the specified context from the kubeconfig
delete-user Delete the specified user from the kubeconfig
+ get-aliases Display aliases defined in the kubeconfig
get-clusters Display clusters defined in the kubeconfig
get-contexts Describe one or many contexts
get-users Display users defined in the kubeconfig
+ rename-alias Rename an alias from the kubeconfig file
rename-context Rename a context from the kubeconfig file
set Set an individual value in a kubeconfig file
+ set-alias Set an alias entry in kubeconfig
set-cluster Set a cluster entry in kubeconfig
set-context Set a context entry in kubeconfig
set-credentials Set a user entry in kubeconfig
unset Unset an individual value in a kubeconfig file
use-context Set the current-context in a kubeconfig file
view Display merged kubeconfig settings or a specified kubeconfig file
...
# ~/.kube/config
apiVersion: v1
clusters: {}
contexts: {}
current-context: foo
kind: Config
preferences:
+   aliases: {}
users: {}
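Under the proposed schema, a kubeconfig with a couple of aliases defined might look like the following. This is a sketch; nesting the map under preferences is exactly what the diff above proposes, and the alias entries are taken from the examples later in this proposal.

```yaml
# Hypothetical kubeconfig after two set-alias invocations.
apiVersion: v1
kind: Config
current-context: foo
clusters: {}
contexts: {}
users: {}
preferences:
  aliases:
    ga: get all
    gp: get pods
```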
Why This Matters
The current method of setting up aliases, the alias builtin of POSIX shells, is inconsistent and cumbersome: it relies on external shell configuration and leaves definitions scattered across rc files. To address this, this proposal introduces four config subcommands for managing kubeconfig aliases. The approach, inspired by the simplicity of Git, gives a seamless experience regardless of the chosen shell.
In Git, the process looks like this:
git config --global alias.ci commit
# To rename, you delete and create again
git config --get-regexp ^alias
git config --global --unset alias.ci
The proposal aligns with this simplicity, providing a standardized and easy method for users to create, rename, list, and delete aliases with kubectl:
kubectl config set-alias
kubectl config rename-alias
kubectl config get-aliases
kubectl config delete-alias
Draft subcommands
Subcommand set-alias
kubectl config set-alias NAME COMMAND [options]
For instance:
% kubectl config set-alias ga 'get all'
Alias "ga" created.
% kubectl config set-alias gp 'get pods'
Alias "gp" created.
% kubectl config set-alias dp 'delete pods'
Alias "dp" created.
% kubectl config set-alias ex 'explain'
Alias "ex" created.
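The bookkeeping behind set-alias can be sketched in a few lines. This is an illustrative model, not kubectl code: the names set_alias and BUILTIN_SUBCOMMANDS are hypothetical, the kubeconfig is modeled as a plain dict, and the collision guard and "modified" wording are this sketch's assumptions.

```python
# Hypothetical sketch of set-alias over the proposed preferences.aliases map.
# Not kubectl code; names and behavior here are illustrative assumptions.

BUILTIN_SUBCOMMANDS = {"get", "delete", "explain", "apply", "config"}

def set_alias(config: dict, name: str, command: str) -> str:
    """Create or update an alias entry under preferences.aliases."""
    if name in BUILTIN_SUBCOMMANDS:
        # Refuse names that would shadow a real subcommand (assumption).
        raise ValueError(f"alias {name!r} would shadow a built-in subcommand")
    aliases = config.setdefault("preferences", {}).setdefault("aliases", {})
    verb = "modified" if name in aliases else "created"
    aliases[name] = command
    return f'Alias "{name}" {verb}.'

config = {"apiVersion": "v1", "kind": "Config"}
print(set_alias(config, "ga", "get all"))  # Alias "ga" created.
```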
Subcommand get-aliases
kubectl config get-aliases [options]
For instance:
% kubectl config get-aliases
NAME  COMMAND
ga    get all
gp    get pods
dp    delete pods
ex    explain
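The table above can be produced from the same dict model, padding the NAME column to the longest alias. Again a sketch with a hypothetical helper name, not kubectl's actual table writer.

```python
# Hypothetical rendering of the get-aliases table from the proposed
# preferences.aliases map; render_aliases is illustrative, not kubectl code.

def render_aliases(config: dict) -> str:
    """Return the NAME/COMMAND table shown by `kubectl config get-aliases`."""
    aliases = config.get("preferences", {}).get("aliases", {})
    # Pad the NAME column to the widest alias (or the header itself).
    width = max(len("NAME"), *(len(n) for n in aliases)) if aliases else len("NAME")
    rows = [f"{'NAME':<{width}}  COMMAND"]
    rows += [f"{name:<{width}}  {cmd}" for name, cmd in aliases.items()]
    return "\n".join(rows)

config = {"preferences": {"aliases": {"ga": "get all", "gp": "get pods"}}}
print(render_aliases(config))
```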
Subcommand delete-alias
kubectl config delete-alias NAME [options]
For instance:
% kubectl config delete-alias ga
Alias "ga" deleted.
Subcommand rename-alias
kubectl config rename-alias OLD_NAME NEW_NAME [options]
For instance:
% kubectl config rename-alias ga foo
Alias "ga" renamed to "foo".
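delete-alias and rename-alias reduce to map operations on the same structure: a delete is a guarded removal, and a rename is a pop-and-reinsert under the new key. A sketch with hypothetical helper names and error behavior; not kubectl code.

```python
# Hypothetical sketches of delete-alias and rename-alias over the
# proposed preferences.aliases map; not kubectl code.

def delete_alias(config: dict, name: str) -> str:
    """Remove an alias entry, failing if it does not exist (assumption)."""
    aliases = config.get("preferences", {}).get("aliases", {})
    if name not in aliases:
        raise KeyError(f"no alias named {name!r} in kubeconfig")
    del aliases[name]
    return f'Alias "{name}" deleted.'

def rename_alias(config: dict, old: str, new: str) -> str:
    """Move an alias to a new name, refusing to clobber an existing one."""
    aliases = config.get("preferences", {}).get("aliases", {})
    if old not in aliases:
        raise KeyError(f"no alias named {old!r} in kubeconfig")
    if new in aliases:
        raise ValueError(f"alias {new!r} already exists")
    aliases[new] = aliases.pop(old)  # keep the command, swap the key
    return f'Alias "{old}" renamed to "{new}".'

config = {"preferences": {"aliases": {"ga": "get all", "gp": "get pods"}}}
print(rename_alias(config, "ga", "foo"))  # Alias "ga" renamed to "foo".
print(delete_alias(config, "gp"))         # Alias "gp" deleted.
```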
This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/sig cli
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten