Finding the local kubectl version requires either waiting for a timeout or using a deprecated option (?)
What happened:
- I wanted to find out the version of kubectl I have installed, even though I have no selected context.
- I wanted to find out the version of kubectl I have installed, even though I have a current context that is offline.
Sample console sessions:
laptop:~$ kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.24.0
Kustomize Version: v4.5.4
Unable to connect to the server: dial tcp 203.0.113.42:443: connect: no route to host
$ kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.24.0
Kustomize Version: v4.5.4
The connection to the server localhost:8080 was refused - did you specify the right host or port?
$ kubectl version --client
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:46:05Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:46:05Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
The connection to the server localhost:8080 was refused - did you specify the right host or port?
What you expected to happen: Something like:
- kubectl version shows the client version only.
- kubectl version --client does not show any warning.
For example:
$ kubectl version
Client Version: v1.24.0
Kustomize Version: v4.5.4
$ kubectl version --client
Client Version: v1.24.0
Kustomize Version: v4.5.4
It's OK to have kubectl version --include-cluster-info or kubectl version --remote show the remote version too.
I would also be OK if kubectl version only checked the remote version when a current context is explicitly set (no fallback to an implicit server URL).
How to reproduce it (as minimally and precisely as possible):
- Visit https://kubernetes.io/docs/tasks/tools/#kubectl
- Install kubectl as instructed and, on a vanilla system, check the kubectl version.
Anything else we need to know?:
As a project, we need a way for people who are setting up a brand new kubectl to confirm that they have a working and current kubectl.
It's very helpful if this confirmation step doesn't require explaining that there is a warning: it's good practice not to have people become accustomed to seeing and then skipping warning messages.
Environment:
- Kubernetes client and server versions (use kubectl version): v1.24.0
- Cloud provider or hardware configuration: n/a
- OS (e.g. cat /etc/os-release): Linux, but relevant to all OSs
Prompted by https://github.com/kubernetes/website/issues/33764
Is it reasonable to get the client version by running kubectl version -oyaml --client? The example output is:
clientVersion:
buildDate: "2022-05-03T13:36:49Z"
compiler: gc
gitCommit: 4ce5a8954017644c5420bae81d72b09b735c21f0
gitTreeState: clean
gitVersion: v1.24.0
goVersion: go1.18.1
major: "1"
minor: "24"
platform: darwin/amd64
kustomizeVersion: v4.5.4
We could extract the kubectl version like the following:
$ kubectl version -oyaml --client|awk '/gitVersion/{print $2;}'
v1.24.0
I believed that kubectl version -oyaml --client is deprecated because --client is deprecated. However, if the deprecation only applies to the default output format, then these both work:
kubectl version --client -ojson | jq -r .clientVersion.gitVersion
kubectl version -oyaml --client|awk '/gitVersion/{print $2;}'
A shame to require external tools (jq, awk) to check the install.
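For what it's worth, the awk filter can be sanity-checked offline against the sample YAML output above, with a here-doc standing in for the kubectl output (no cluster, and no kubectl install, needed):

```shell
# Offline check of the awk filter: the here-doc mimics a fragment of the
# sample `kubectl version -oyaml --client` output quoted above.
awk '/gitVersion/{print $2;}' <<'EOF'
clientVersion:
  buildDate: "2022-05-03T13:36:49Z"
  gitVersion: v1.24.0
kustomizeVersion: v4.5.4
EOF
# prints: v1.24.0
```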
kubectl version --client --short would work OK apart from the deprecation warning.
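One possible workaround: if the deprecation warning is written to stderr (which appears to be the case for v1.24, though that should be verified against the real binary), redirecting stderr leaves clean output. A minimal sketch, using a stub function in place of kubectl so it can be tried without an install; the stub's messages are copied from the sessions above:

```shell
# Stub standing in for kubectl v1.24. Assumption: the real binary writes the
# deprecation warning to stderr and the version lines to stdout.
kubectl() {
  echo 'Flag --short has been deprecated, and will be removed in the future.' >&2
  echo 'Client Version: v1.24.0'
  echo 'Kustomize Version: v4.5.4'
}

# Discarding stderr hides the warning and keeps only the version lines.
kubectl version --client --short 2>/dev/null
```

If the warning really does go to stderr, install docs could use `2>/dev/null` rather than teaching readers to skip warnings by eye.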
Actually, we could mention that both
- kubectl version --client -ojson | jq -r .clientVersion.gitVersion
- kubectl version -oyaml --client | awk '/gitVersion/{print $2;}'
are viable options, and let users choose.
It sounds like this is expected behavior, and that we should document the warning as something for readers to be aware of and ignore if they are deploying v1.24 kubectl.
I believed that kubectl version -oyaml --client is deprecated because --client is deprecated.
Nope, the deprecated flag is actually --short, as the output of kubectl version --short will become the default in the future; the flag --client is not deprecated.
we should document the warning as something for readers to be aware of and ignore if they are deploying v1.24 kubectl.
Yeah that would be great.
the flag --client is not deprecated
$ kubectl version --client
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:46:05Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
I think people will see that and think that kubectl version --client is deprecated: they run it, and they see a deprecation warning.
I think people will see that and think that kubectl version --client is deprecated
Ah that's because --short is set by default and is deprecated now, so users would see this warning.
As discussed on slack, and earlier today during bug scrub, we need to:
- document that the deprecation warning is expected
- document that first-timers without a cluster should use --client
@sftim I looked into when you hit the 30s timeout: that happens when you have an invalid kubeconfig pointing to a valid host but the wrong port. In all other cases (missing host, missing kubeconfig, invalid host) you should get an instant response.
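For illustration, a kubeconfig of the kind described (reachable host, wrong port) might look like the fragment below. This is a made-up example, not taken from the issue: the server address reuses the TEST-NET address from the session above, and the port and names are invented.

```yaml
# Hypothetical kubeconfig matching the comment above: the host is reachable,
# but nothing answers on the chosen port, so kubectl version hangs for the
# full client-side connect timeout (~30s) before reporting an error.
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://203.0.113.42:6444   # valid host, wrong port (invented)
  name: example
contexts:
- context:
    cluster: example
    user: example
  name: example
current-context: example
users:
- name: example
  user: {}
```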
/triage accepted
/help-wanted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen This issue was accepted
/remove-lifecycle rotten
@sftim: Reopened this issue.
In response to this:
/reopen This issue was accepted
/remove-lifecycle rotten
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Also see https://github.com/kubernetes/website/pull/39431
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten