kubectl fails to aggregate cluster role selectors when specifying multiple --aggregation-rule flags
What happened?
I've created three cluster roles: one with read access to pods, one with permission to delete pods, and one that aggregates the rules of the first two. When the aggregating role is created with imperative commands, the resulting role fails to aggregate the permissions even though I passed a separate --aggregation-rule argument for each selector. The generated selector looks for roles that carry all of the labels from every aggregation rule combined, instead of roles that match any one of the aggregation rules. This is counter-intuitive, since I specified multiple aggregation rules.
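Concretely, the selector kubectl generates and the selector I expected look like this (both shapes are taken verbatim from the YAML output further down in this report):

What kubectl generates — a single selector, so a role must carry both labels:

aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      delete: "true"
      reader: "true"

What I expected — one selector per --aggregation-rule flag, so a role matching either label is aggregated:

aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      delete: "true"
  - matchLabels:
      reader: "true"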
What did you expect to happen?
Creating the cluster role that aggregates the rules should work properly with imperative commands without me having to manually edit the resource.
How can we reproduce it (as minimally and precisely as possible)?
Here is the step-by-step process I used to reproduce the issue:
kind-learning/default k create clusterrole test --verb=get,watch,list --resource=pods
clusterrole.rbac.authorization.k8s.io/test created
kind-learning/default k create clusterrole test2 --verb=delete --resource=pods
clusterrole.rbac.authorization.k8s.io/test2 created
kind-learning/default k label clusterrole test reader=true
clusterrole.rbac.authorization.k8s.io/test labeled
kind-learning/default k label clusterrole test2 delete=true
clusterrole.rbac.authorization.k8s.io/test2 labeled
kind-learning/default k describe clusterrole test
Name: test
Labels: reader=true
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
pods [] [] [get watch list]
kind-learning/default k describe clusterrole test2
Name: test2
Labels: delete=true
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
pods [] [] [delete]
kind-learning/default k create clusterrole test3 --aggregation-rule=reader=true --aggregation-rule=delete=true
clusterrole.rbac.authorization.k8s.io/test3 created
kind-learning/default k describe clusterrole test3
Name: test3
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
The issue lies in the way the cluster role is generated:
kind-learning/default k get clusterrole test3 -o yaml
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      delete: "true"
      reader: "true"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: "2024-01-14T14:19:25Z"
  name: test3
  resourceVersion: "1508"
  uid: b138a790-d2b0-4bae-9e1f-7d51179599ec
rules: null
The single matchLabels selector looks for roles that carry both the delete=true and reader=true labels. These should instead be two separate selectors, since the clusterrole test3 was created with two separate --aggregation-rule arguments.
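A quick way to see the AND behaviour is to run the equivalent label queries against the cluster (this assumes the only roles carrying these labels are the test and test2 roles created above):

# The combined selector matches neither test nor test2, so nothing gets aggregated:
kubectl get clusterroles -l reader=true,delete=true
# Each label on its own matches exactly one of the two roles:
kubectl get clusterroles -l reader=true
kubectl get clusterroles -l delete=true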
Editing the cluster role to have two separate matchLabels selectors fixes the issue:
kind-learning/default k edit clusterrole test3
clusterrole.rbac.authorization.k8s.io/test3 edited
kind-learning/default k get clusterrole test3 -o yaml
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      delete: "true"
  - matchLabels:
      reader: "true"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: "2024-01-14T14:19:25Z"
  name: test3
  resourceVersion: "2066"
  uid: b138a790-d2b0-4bae-9e1f-7d51179599ec
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - delete
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - watch
  - list
kind-learning/default k describe clusterrole test3
Name: test3
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
pods [] [] [delete get watch list]
The edit I performed was this:
-  - matchLabels:
-      delete: "true"
-      reader: "true"
+  - matchLabels:
+      delete: "true"
+  - matchLabels:
+      reader: "true"
Anything else we need to know?
No response
Kubernetes version
$ kubectl version
Client Version: v1.29.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.0
Cloud provider
cat kind-cluster-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
kubeadmConfigPatches:
- |
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  evictionHard:
    nodefs.available: "0%"
kubeadmConfigPatchesJSON6902:
- group: kubeadm.k8s.io
  version: v1beta3
  kind: ClusterConfiguration
  patch: |
    - op: add
      path: /apiServer/certSANs/-
      value: my-hostname
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 8080
    protocol: TCP
  - containerPort: 443
    hostPort: 8443
    protocol: TCP
- role: worker
- role: worker
- role: worker
  extraPortMappings:
  - containerPort: 30080
    hostPort: 30080
    protocol: TCP
OS version
$ uname -a
Darwin Alexandrus-MacBook-Pro.local 21.6.0 Darwin Kernel Version 21.6.0: Wed Aug 10 14:28:23 PDT 2022; root:xnu-8020.141.5~2/RELEASE_ARM64_T6000 arm64
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)
This issue is currently awaiting triage.
If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
/sig cli
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale /transfer kubectl
/close
The root of the issue here is that the create command makes some assumptions based on the provided flags. This is because the create commands are meant to be introductory commands to help acclimate people to Kubernetes. For this use case you should use the apply command and write the manifest as you wish it to be. To change how the create command behaves today, we would have to make a breaking change to the flag behavior, or add a new flag to specify that the aggregation should be an OR operation instead of an AND operation, and SIG-CLI does not wish to add more flags to the imperative commands. If there is any documentation implying this would be an OR operation, please let me know so I can follow up and correct it.
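For reference, a declarative manifest along those lines, reusing the labels from the repro above, could look like the following (the file name is just a placeholder):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: test3
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      reader: "true"
  - matchLabels:
      delete: "true"
rules: []  # the aggregation controller fills this in automatically

Applying it (e.g. kubectl apply -f test3-clusterrole.yaml) creates the role with both selectors, and the controller aggregates the rules of any ClusterRole matching either label.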
@mpuckett159: Closing this issue.