Command default config rules for kubectl (perhaps in ~/.kube/config)
/kind feature /sig cli
Kubernetes version (use kubectl version): All
Environment:
- Cloud provider or hardware configuration: N/A
- OS (e.g. from /etc/os-release): N/A
- Kernel (e.g. uname -a): N/A
- Install tools: kubectl
- Others:
What happened:
We would like kubectl to support rules that let you specify default options for certain commands.
For example:
kubectl get pods
The default for --show-all is true.
What you expected to happen:
But what if I want --show-all=false to be the default, without specifying it explicitly like this:
kubectl get pods --show-all=false
I just want to run it like this:
kubectl get pods
How to reproduce it (as minimally and precisely as possible):
kubectl get pods --show-all=false
Anything else we need to know:
It would be nice to have a generic structure in the config that covers per-command default options for all commands.
Some background here: the kubectl get pods example is particularly egregious for me. The default for --show-all is the opposite of my expectation about 98% of the time, and it's particularly challenging that there is no shorter way to apply the --show-all=false inversion, because the shorthand -a causes no change from the default behavior.
In fact, now that I point it out, it is rather odd that the -a flag even exists. I'm assuming it's a historical artifact from when the default was the other way around, which is what I remember from when I first started using kubectl years ago.
/area kubectl /priority P2
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Really want to try to tackle this one this year. There's going to be an increased need for user-defined defaults.
cc @dougsland
/lifecycle frozen /remove-priority P2 /priority backlog
Nice @eddiezane - sounds like a good plan.
/assign @eddiezane /cc @dougsland
/assign @dougsland