`kubectl get all` does not return custom resources on first use (works subsequently)
What happened:
The first time (see below) that I run kubectl get all, it returns "No resources found in {namespace} namespace.":
kubectl get all
No resources found in {namespace} namespace.
If I run kubectl get {resource}, knowing that there are custom resources in the namespace, the command returns them correctly:
kubectl get customers
NAME NAME DOMAIN
42a3 Foo foo.com
872f Bar bar.com
NOTE I realized that I've duplicated the column name NAME, but I don't think this is the problem, because...
Then (!) rerunning kubectl get all subsequently and correctly (!) returns all the custom resources in the namespace:
kubectl get all
NAME NAME DOMAIN
42a3 Foo foo.com
872f Bar bar.com
...
NOTE There are multiple resource types returned.
What you expected to happen:
I expect kubectl get all to return all the custom resources in the namespace consistently (the first time too).
How to reproduce it (as minimally and precisely as possible):
Every day (!), I deploy a new Google Kubernetes Engine cluster and deploy self-developed CRDs and an operator to it.
For the past few weeks (unfortunately I don't have an audit log of software updates), I've noticed this behavior while debugging (every day, the first time I run the command).
Before this, kubectl get all
returned the list of custom resources correctly on first use.
Anything else we need to know?:
There are only custom resources in the namespace. There are no Pods etc.
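Worth noting for context: kubectl get all only enumerates resource types that belong to the "all" API category, so a CRD has to opt in via spec.names.categories. Since the author's resources do eventually appear, their CRDs presumably already include it. A minimal sketch of what that opt-in looks like (group, kind, and names here are invented for illustration):

```shell
# Hypothetical CRD fragment (group/kind/names invented for illustration).
# The "all" entry under spec.names.categories is what makes instances of a
# custom type show up in `kubectl get all`.
CRD_FRAGMENT='apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: customers.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Customer
    plural: customers
    categories:
      - all  # opt this type into the "all" category used by kubectl get all'
printf '%s\n' "${CRD_FRAGMENT}"
```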
It should not affect the behavior, but I consistently run kubectl config set-context before I try kubectl get all (tomorrow I won't, to see whether it makes a difference):
kubectl config set-context --current --namespace={namespace}
It should not affect the behavior, but I'm also forcing the use of gke-gcloud-auth-plugin via an updated ${HOME}/.bashrc:
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
Environment:
- Kubernetes client and server versions (use kubectl version):
kubectl version --output=yaml
clientVersion:
buildDate: "2022-05-24T12:26:19Z"
compiler: gc
gitCommit: 3ddd0f45aa91e2f30c70734b175631bec5b5825a
gitTreeState: clean
gitVersion: v1.24.1
goVersion: go1.18.2
major: "1"
minor: "24"
platform: linux/amd64
kustomizeVersion: v4.5.4
serverVersion:
buildDate: "2022-06-03T03:28:59Z"
compiler: gc
gitCommit: c47264b4fe4c0eee76c51b69f1bfcc167fc40c7b
gitTreeState: clean
gitVersion: v1.24.0-gke.1801
goVersion: go1.18.1b7
major: "1"
minor: "24"
platform: linux/amd64
- Cloud provider or hardware configuration:
Google Cloud Kubernetes Engine, version 1.24.0-gke.1801
- OS (e.g. cat /etc/os-release):
@DazWilkin: This issue is currently awaiting triage.
SIG CLI takes the lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label. The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
After applying new CRDs, it takes some time for them to be aggregated into the discovery endpoint (the more CRDs there are, the longer it takes). It is possible that kubectl get all does not return them on the first request (or does it time out? kubectl get all -v=9 shows what is happening).
By calling kubectl get {specific_CRDs}, you are forcing discovery caching for those specific CRDs, and that's probably why subsequent runs of kubectl get all return them.
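For anyone wanting to confirm the caching explanation above: kubectl keeps discovery results in an on-disk cache. A small sketch, assuming the default cache location (it can be redirected with --cache-dir); the destructive step is left commented out:

```shell
# kubectl's on-disk discovery cache, at its default location
# (assumption: --cache-dir has not been overridden).
DISCOVERY_CACHE="${HOME}/.kube/cache/discovery"
echo "discovery cache: ${DISCOVERY_CACHE}"

# To reproduce the cold-cache behavior, clear it and re-run verbosely:
#   rm -rf "${DISCOVERY_CACHE}"   # destructive, hence commented out
#   kubectl get all -v=9          # look for "Invalidating discovery information"
```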
Today, I left the cluster running (w/ CRDs deployed and custom resources created) for 3 hours before attempting to enumerate the custom resources.
I used the --namespace flag rather than kubectl config set-context ...
The behavior is unchanged from what I described.
kubectl get all --namespace={namespace}
No resources found in {namespace}
kubectl get all --namespace={namespace}
No resources found in {namespace}.
kubectl get customers --namespace={namespace}
NAME NAME DOMAIN
42a3 Foo foo.com
872f Bar bar.com
kubectl get all --namespace={namespace}
NAME NAME DOMAIN
42a3 Foo foo.com
872f Bar bar.com
I forgot to add -v=9 (will try that tomorrow) but -- IIUC -- the fact that I'm getting No resources found suggests that the command is completing rather than timing out.
Hmmmm 🤔
Using -v=9:

- kubectl get all --namespace={namespace} -v=9
loader.go:372] Config loaded from file: [REDACTED]
round_trippers.go:466] curl -v -XGET -H "Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json" -H "User-Agent: kubectl/v1.24.2 (linux/amd64) kubernetes/f66044f" 'https://34.106.24.10/api/v1/namespaces/[REDACTED]/pods?limit=500'
round_trippers.go:510] HTTP Trace: Dial to tcp:34.106.24.10:443 succeed
round_trippers.go:553] GET https://34.106.24.10/api/v1/namespaces/[REDACTED]/pods?limit=500 200 OK in 188 milliseconds
It repeats the GETs for the built-in resources (replicationcontrollers, services etc.) but doesn't appear to check for the custom resources.
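A quick way to see exactly which collections a given run fetched is to save the -v=9 output to a file and filter the GET lines. A sketch (the file name and sample trace lines below are illustrative, in the shape of the output above):

```shell
# Two sample lines in the shape of the -v=9 trace (host and namespace redacted).
cat > /tmp/kubectl-trace.log <<'EOF'
round_trippers.go:553] GET https://host/api/v1/namespaces/ns/pods?limit=500 200 OK in 188 milliseconds
round_trippers.go:553] GET https://host/api/v1/namespaces/ns/services?limit=500 200 OK in 90 milliseconds
EOF

# List the unique collection URLs kubectl actually fetched:
grep -o 'GET https://[^ ]*' /tmp/kubectl-trace.log | sort -u
```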
- kubectl get {custom-resource} --namespace={namespace} -v=9
loader.go:372] Config loaded from file: [REDACTED]
discovery.go:214] Invalidating discovery information
round_trippers.go:466] curl -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.24.2 (linux/amd64) kubernetes/f66044f" 'https://34.106.24.10/api?timeout=32s'
round_trippers.go:510] HTTP Trace: Dial to tcp:34.106.24.10:443 succeed
round_trippers.go:553] GET https://34.106.24.10/api?timeout=32s 200 OK in 171 milliseconds
round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 48 ms TLSHandshake 55 ms ServerProcessing 63 ms Duration 171 ms
round_trippers.go:577] Response Headers:
round_trippers.go:580] Audit-Id: 456aff2c-84eb-4343-8b13-0eb4ee45ac21
round_trippers.go:580] Cache-Control: no-cache, private
round_trippers.go:580] Content-Type: application/json
round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f9f235c8-7631-4cbc-86ee-08b8880d0902
round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 8588b28e-012e-4c56-b921-d9a2f30e4523
round_trippers.go:580] Content-Length: 132
round_trippers.go:580] Date: Fri, 24 Jun 2022 16:48:24 GMT
request.go:1073] Response Body: {"kind":"APIVersions","versions":["v1"],"serverAddressByClientCIDRs":[{"clientCIDR":"0.0.0.0/0","serverAddress":"10.180.0.2:443"}]}
round_trippers.go:466] curl -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.24.2 (linux/amd64) kubernetes/f66044f" 'https://34.106.24.10/apis?timeout=32s'
round_trippers.go:553] GET https://34.106.24.10/apis?timeout=32s 200 OK in 50 milliseconds
round_trippers.go:570] HTTP Statistics: GetConnection 0 ms ServerProcessing 50 ms Duration 50 ms
round_trippers.go:577] Response Headers:
round_trippers.go:580] Date: Fri, 24 Jun 2022 16:48:24 GMT
round_trippers.go:580] Audit-Id: aa0f4f2a-8b9d-472d-9f94-62365006788b
round_trippers.go:580] Cache-Control: no-cache, private
round_trippers.go:580] Content-Type: application/json
round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f9f235c8-7631-4cbc-86ee-08b8880d0902
round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 8588b28e-012e-4c56-b921-d9a2f30e4523
request.go:1073] Response Body: {"kind":"APIGroupList","apiVersion":"v1","groups":[REDACTED]}
NOTE
- The trace shows Invalidating discovery information
- The Response Body enumerates all resources, including my custom resources
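Given that fetching a specific resource refreshes discovery, a plausible workaround (an assumption on my part, not a confirmed fix) is to force a full discovery walk before the first get all; kubectl api-resources enumerates every served group/version. Sketched as a hypothetical helper:

```shell
# Hypothetical helper: warm kubectl's discovery cache, then list everything.
warm_and_get_all() {
  ns="$1"
  # api-resources walks every served group/version, which should
  # repopulate the discovery cache so the CRDs are known to `get all`.
  kubectl api-resources --namespaced=true >/dev/null &&
    kubectl get all --namespace="${ns}"
}
echo "usage: warm_and_get_all <namespace>"
```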
- kubectl get all --namespace={namespace} -v=9
loader.go:372] Config loaded from file: [REDACTED]
discovery.go:214] Invalidating discovery information
round_trippers.go:466] curl -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.24.2 (linux/amd64) kubernetes/f66044f" 'https://34.106.24.10/api?timeout=32s'
round_trippers.go:510] HTTP Trace: Dial to tcp:34.106.24.10:443 succeed
round_trippers.go:553] GET https://34.106.24.10/api?timeout=32s 200 OK in 189 milliseconds
round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 47 ms TLSHandshake 48 ms ServerProcessing 88 ms Duration 189 ms
round_trippers.go:577] Response Headers:
round_trippers.go:580] Audit-Id: 8b0f694c-0ffd-4213-8cb4-b1fb22489ec8
round_trippers.go:580] Cache-Control: no-cache, private
round_trippers.go:580] Content-Type: application/json
round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f9f235c8-7631-4cbc-86ee-08b8880d0902
round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 8588b28e-012e-4c56-b921-d9a2f30e4523
round_trippers.go:580] Content-Length: 132
round_trippers.go:580] Date: Fri, 24 Jun 2022 16:50:21 GMT
request.go:1073] Response Body: {"kind":"APIVersions","versions":["v1"],"serverAddressByClientCIDRs":[{"clientCIDR":"0.0.0.0/0","serverAddress":"10.180.0.2:443"}]}
round_trippers.go:466] curl -v -XGET -H "User-Agent: kubectl/v1.24.2 (linux/amd64) kubernetes/f66044f" -H "Accept: application/json, */*" 'https://34.106.24.10/apis?timeout=32s'
round_trippers.go:553] GET https://34.106.24.10/apis?timeout=32s 200 OK in 47 milliseconds
round_trippers.go:570] HTTP Statistics: GetConnection 0 ms ServerProcessing 47 ms Duration 47 ms
round_trippers.go:577] Response Headers:
round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 8588b28e-012e-4c56-b921-d9a2f30e4523
round_trippers.go:580] Date: Fri, 24 Jun 2022 16:50:21 GMT
round_trippers.go:580] Audit-Id: 912c67cc-b865-411f-9a26-e115ea401fe2
round_trippers.go:580] Cache-Control: no-cache, private
round_trippers.go:580] Content-Type: application/json
round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f9f235c8-7631-4cbc-86ee-08b8880d0902
request.go:1073] Response Body: {"kind":"APIGroupList","apiVersion":"v1","groups":[REDACTED]}
...
round_trippers.go:466] curl -v -XGET -H "Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json" -H "User-Agent: kubectl/v1.24.2 (linux/amd64) kubernetes/f66044f" 'https://34.106.24.10/apis/{api}/{version}/namespaces/{namespace}/{resource}?limit=500'
...
NOTE
- Once again it invalidates discovery information
- The Response Body enumerates all resources, including my custom resources
- It then GETs all the custom resources
I've retained the logs but would prefer not to share them publicly.
We think this is a known issue with a PR in flight.
/assign @soltysh
https://github.com/kubernetes/kubernetes/pull/96771
It might be similar to https://github.com/kubernetes/kubernetes/pull/96771, but I think the category expander works differently. I'll have to play with it a bit more to get to the root cause.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
One last note regarding this issue: I tested the same steps on the latest version and it worked as expected.