Misleading error message instead of timeout exception
What happened:
PS C:\Users\onovak> kubectl get cm --all-namespaces -l grafana_dashboard=true -o json
error: the server doesn't have a resource type "cm"
What you expected to happen:
`Unable to connect to the server: dial tcp x.x.x.x:443: i/o timeout` would be a far more expected outcome here
How to reproduce it (as minimally and precisely as possible):
- corrupt your local network configuration so that you have no access to the k8s cluster (in my case the VPN connection was down)
- run `kubectl get cm --all-namespaces -l grafana_dashboard=true -o json`
- wait a bit (fixing the network too quickly causes no error; fixing it too slowly causes a connection error)
- fix the network configuration
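The flaky "missing VPN" step above can be simulated more deterministically. A sketch, assuming `kubectl` is on the PATH; the server address below is an arbitrary non-routable placeholder, not the reporter's cluster:

```shell
# Repro sketch: point kubectl at a non-routable address to force the dial
# timeout without touching the real network configuration.
# --server and --request-timeout are standard kubectl flags;
# 10.255.255.1 is a non-routable placeholder address.
if command -v kubectl >/dev/null 2>&1; then
  kubectl --server=https://10.255.255.1:443 --request-timeout=5s \
    get cm --all-namespaces -l grafana_dashboard=true -o json || true
  # Expected: "Unable to connect to the server: dial tcp ...: i/o timeout"
  # Reported instead: error: the server doesn't have a resource type "cm"
else
  echo "kubectl not installed; skipping"
fi
```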
Anything else we need to know?:
I was able to reproduce this 3 times within about 15-20 minutes, but I have no reliably reproducible scenario :(
Environment:
- Kubernetes client and server versions (use `kubectl version`):
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.13-eks-84b4fe6", GitCommit:"e1318dce57b3e319a2e3fecf343677d1c4d4aa75", GitTreeState:"clean", BuildDate:"2022-06-09T18:22:07Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: aws eks
- OS (e.g. `cat /etc/os-release`): Microsoft Windows 10 Pro, 10.0.19044 N/A Build 19044
@novak-as: This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
This may already be addressed by the increase of the discovery cache TTL.
https://github.com/kubernetes/enhancements/issues/3352 may also address it.
Let us know if you're still seeing this in a release or so.
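Since kubectl caches discovery results on disk, a stale cache left behind by a failed discovery call is a plausible source of the misleading "resource type" error. A hedged workaround sketch, assuming the default cache location (`~/.kube/cache`; override with `--cache-dir` if yours differs):

```shell
# Workaround sketch, assuming the default kubectl cache location.
# A failed discovery request can leave stale or empty data behind, which
# later surfaces as: the server doesn't have a resource type "cm".
# Removing the cache forces kubectl to re-discover API resources on the
# next invocation.
CACHE_DIR="${HOME}/.kube/cache"
rm -rf "${CACHE_DIR}/discovery" "${CACHE_DIR}/http"
echo "cleared kubectl discovery cache under ${CACHE_DIR}"
```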
/close
@eddiezane: Closing this issue.