auth/can-i: check subresource if it exists and belongs to resource
What happened:

The following command returns no:

$ kubectl auth can-i create pods --subresource=exec

Now try a random or non-existent subresource such as foobarbaz:

$ kubectl auth can-i create pods --subresource=foobarbaz

This time it returns yes. The same happens for:

$ kubectl auth can-i create pods/foobarbaz
What you expected to happen:

$ kubectl auth can-i create pods --subresource=thisisnotexist

It should return something like: Subresource "thisisnotexist" does not belong to "pods"
How to reproduce it (as minimally and precisely as possible):
Just run the commands above in your cluster.
Anything else we need to know?:
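For context, kubectl auth can-i is essentially a thin wrapper around the SelfSubjectAccessReview API, and the authorizer only evaluates the given attributes against policy rules; it does not validate that the subresource exists, which is presumably why an arbitrary subresource string can come back as allowed. A minimal client-go sketch of the equivalent check (a hand-written illustration, not kubectl's actual code path; it assumes a standard kubeconfig):

```go
package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config), roughly what kubectl does.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Same access check as `kubectl auth can-i create pods --subresource=foobarbaz`:
	// the subresource is passed through as an opaque string and is only matched
	// against authorization rules, never against discovery.
	ssar := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:        "create",
				Resource:    "pods",
				Subresource: "foobarbaz",
			},
		},
	}
	resp, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.TODO(), ssar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", resp.Status.Allowed)
}
```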
Environment:
- Kubernetes client and server versions (use kubectl version): Client v1.24.0, Server v1.20
- Cloud provider or hardware configuration: on-prem
- OS (e.g. cat /etc/os-release): Ubuntu 20.04
/assign @atiratree
/triage accepted
I think it should be enough to output a warning instead of an error. I started a PR that fixes that: https://github.com/kubernetes/kubernetes/pull/110752
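One way such a warning could be produced (a rough sketch of the general idea, not necessarily how the linked PR implements it) is to ask the discovery API whether the resource list for the group/version contains an entry named resource/subresource, since discovery advertises subresources such as pods/exec next to their parent resources:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

// subresourceExists reports whether discovery advertises resource/subresource
// in the given group/version (e.g. "v1" for pods). Illustrative helper only.
func subresourceExists(dc discovery.DiscoveryInterface, groupVersion, resource, subresource string) (bool, error) {
	list, err := dc.ServerResourcesForGroupVersion(groupVersion)
	if err != nil {
		return false, err
	}
	want := resource + "/" + subresource
	for _, r := range list.APIResources {
		// Subresources appear in discovery with names like "pods/exec".
		if r.Name == want {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}
	if ok, err := subresourceExists(dc, "v1", "pods", "foobarbaz"); err != nil {
		panic(err)
	} else if !ok {
		fmt.Println(`Warning: subresource "foobarbaz" does not seem to exist for "pods"`)
	}
}
```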
nit: kubectl auth can-i create pods/foobarbaz is checking a pod named foobarbaz, not a subresource.
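To make the distinction concrete: in the SelfSubjectAccessReview attributes, the slash form fills the object name while --subresource fills the subresource field (a small illustration with made-up values):

```go
package main

import (
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
)

func main() {
	// kubectl auth can-i create pods/foobarbaz
	// "foobarbaz" is treated as the object *name*.
	byName := authorizationv1.ResourceAttributes{
		Verb:     "create",
		Resource: "pods",
		Name:     "foobarbaz",
	}

	// kubectl auth can-i create pods --subresource=foobarbaz
	// "foobarbaz" is treated as the *subresource*.
	bySubresource := authorizationv1.ResourceAttributes{
		Verb:        "create",
		Resource:    "pods",
		Subresource: "foobarbaz",
	}

	fmt.Printf("by name: %+v\nby subresource: %+v\n", byName, bySubresource)
}
```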
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale