kubewarden-controller
Feature Request: do not apply certain policies to selected users/groups/service accounts
Is your feature request related to a problem?
This is a feature request, based on a conversation that happened on Slack.
Hi! That's an interesting question. The user name, uid, and groups are present in the userInfo key of the AdmissionReview objects that come from the Kubernetes API server, so it's possible, but AFAIK right now we don't have any policy written by the Kubewarden team that makes use of it. It would be doable by writing a policy, or by running a Rego policy or a Kyverno one (still experimental) in Kubewarden.
At the moment we as a team plan to have the following fields as part of the settings of every policy: policy_not_applied_to_groups and policy_not_applied_to_service_accounts. Both of these are arrays. In our implementation, we compare them with validationRequest.Request.UserInfo.Groups and validationRequest.Request.UserInfo.Username. If there is a match we simply do kubewarden.AcceptRequest(). Otherwise, we run the other validations and accept the request only if they pass. I want to know if there are other ways to do this without mentioning the groups or IDs in the settings.
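A minimal Go sketch of the exemption check described above. The settings field names come from the message; the helper name and overall structure are illustrative only, and the surrounding validate() plumbing (parsing the AdmissionReview, calling kubewarden.AcceptRequest()) is left out.

```go
package policy

// Settings holds the exemption lists; the JSON keys mirror the field
// names proposed in the conversation above.
type Settings struct {
	PolicyNotAppliedToGroups          []string `json:"policy_not_applied_to_groups"`
	PolicyNotAppliedToServiceAccounts []string `json:"policy_not_applied_to_service_accounts"`
}

// isExempt (hypothetical helper) reports whether the request author should
// bypass the policy: true when the username matches one of the exempted
// service accounts, or when any of the request's groups is exempted.
func isExempt(s Settings, username string, groups []string) bool {
	for _, sa := range s.PolicyNotAppliedToServiceAccounts {
		if sa == username {
			return true
		}
	}
	exempt := make(map[string]bool, len(s.PolicyNotAppliedToGroups))
	for _, g := range s.PolicyNotAppliedToGroups {
		exempt[g] = true
	}
	for _, g := range groups {
		if exempt[g] {
			return true
		}
	}
	return false
}
```

Inside validate(), the policy would call kubewarden.AcceptRequest() when isExempt returns true, and fall through to its normal validation logic otherwise.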
Solution you'd like
No response
Alternatives you've considered
No response
Anything else?
No response
One way to solve this problem is to extend each policy to have an additional check, like the user mentioned. This is however cumbersome.
Other possible solutions I can imagine:
Policy Server Configuration
In the same way that we can instruct a PolicyServer to always accept validation requests originating from a specific namespace, we could add a setting that accepts all the requests originating from a given set of users/groups/service accounts.
I don't particularly like that, because it's too broad and not visible enough during the policy deployment.
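For concreteness only, such a server-wide setting could look like the hypothetical sketch below, written in the style of the controller's Go API types. None of these fields exist today; the names are invented for this example.

```go
package v1

// Hypothetical extension of the PolicyServer spec, illustrating the
// "server-wide exemption" idea discussed above.
type PolicyServerSpec struct {
	// ...existing fields elided...

	// AlwaysAcceptRequestsFromUsers lists users/service accounts whose
	// requests every policy hosted by this PolicyServer would accept.
	AlwaysAcceptRequestsFromUsers []string `json:"alwaysAcceptRequestsFromUsers,omitempty"`

	// AlwaysAcceptRequestsFromGroups does the same for groups.
	AlwaysAcceptRequestsFromGroups []string `json:"alwaysAcceptRequestsFromGroups,omitempty"`
}
```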
Policy Configuration
We could extend the ClusterAdmissionPolicy and AdmissionPolicy CRDs to have a new field that defines which users/groups/service accounts the policy should not apply to.
This is better than the previous solution because it's granular and is more evident to the admins.
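Sketching this option with hypothetical Go types in the style of the controller's CRD definitions; none of these fields exist today and the names are purely illustrative.

```go
package v1

// Hypothetical additions to the policy spec shared by
// ClusterAdmissionPolicy and AdmissionPolicy.
type PolicySpec struct {
	// ...existing fields elided...

	// IgnoreUsers lists usernames (including service account usernames)
	// this policy should never be applied to.
	IgnoreUsers []string `json:"ignoreUsers,omitempty"`

	// IgnoreGroups lists groups this policy should never be applied to.
	IgnoreGroups []string `json:"ignoreGroups,omitempty"`
}
```

The policy-server could then auto-accept whenever the userInfo of the incoming AdmissionReview matches one of these entries, without each policy having to implement the check itself.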
Combine multiple policies together
We could define a new type of policy that performs logical AND/OR operations of multiple policies to determine the final outcome.
For example, we could write a new policy that only looks at the user/group/service account that originated a validation request. The policy would take a list of trusted parties via its configuration. The policy would allow the request if it has been created by a trusted author.
Now we can take this policy and put it in an OR condition with any other Kubewarden policy, like privileged containers: accept if trusted-creator OR privileged-containers. In this case a trusted user would be allowed to create privileged containers.
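To make the combination concrete, here is a hypothetical Go sketch of how such a combining policy could evaluate the OR of its members' verdicts. The types and names are illustrative; this is not an existing Kubewarden feature.

```go
package policy

// Verdict is the outcome of evaluating one member policy against the
// incoming admission request (hypothetical type).
type Verdict struct {
	Policy  string // e.g. "trusted-creator" or "privileged-containers"
	Allowed bool
}

// anyAccepts implements the OR combination described above: the composite
// policy accepts the request as soon as any member policy accepts it.
func anyAccepts(verdicts []Verdict) bool {
	for _, v := range verdicts {
		if v.Allowed {
			return true
		}
	}
	return false
}
```

An AND combination would instead require every member policy to accept the request.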
I like this approach because it's really modular. We don't have to keep adding new logic to the PolicyServer and to the CRDs. On the other hand, we have to understand how this kind of policy would be specified (should we define a new type of policy CRD?) and how it would be implemented.
Long story short, this is something that would need a dedicated RFC.
I like the 3rd option, in terms of the control that the admins get. But in my view this is sensitive data, so I would suggest a way where this data is not part of the Git commits. The key of this data can be used as part of the policy, but the value itself should be configured inside the server, where the data is pulled from an external secret.
Yes, the implementation of the policy that checks for the trusted users/groups/service accounts is entirely up to the policy author. We could of course provide a reference implementation.
There could be different ways to feed this data (the allow list) into the policy. Doing that by hard-coding the values inside of the policy would be the worst way.
A simpler approach would be to leverage the policy settings. Basically, this data would be specified inside of the ClusterAdmissionPolicy / AdmissionPolicy YAML.
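As an illustration of the settings route, a Go policy could decode the allow list from the settings document that the policy-server extracts from the ClusterAdmissionPolicy / AdmissionPolicy manifest. The Settings shape below is an assumption made for this sketch of a trusted-creator style policy.

```go
package policy

import "encoding/json"

// Settings is a hypothetical shape for the trusted-parties allow list
// carried by the policy settings.
type Settings struct {
	TrustedUsers  []string `json:"trusted_users"`
	TrustedGroups []string `json:"trusted_groups"`
}

// NewSettingsFromJSON decodes the raw settings payload handed to the
// policy along with each validation request.
func NewSettingsFromJSON(raw []byte) (Settings, error) {
	var s Settings
	err := json.Unmarshal(raw, &s)
	return s, err
}
```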
Another option would be to make this policy context-aware and read the allow list from a specific ConfigMap or Secret.
I would personally prefer the settings route because:
- Policy definition + configuration stays in the same place. When using a Secret/ConfigMap there's a layer of indirection.
- I don't think this data is confidential, hence worthy of a Secret. Keep in mind that you can remove read access to the ClusterAdmissionPolicy / AdmissionPolicy resources via RBAC, so users won't be able to see their configuration options if you want to.
- Context-aware policies hit the Kubernetes API. Sure, we do some caching, but this could increase the load on the API server if your policy is invoked often. Given the previous points, I don't think this is worth it.
Again, this is my personal taste. You could write your own policy that works in a different way :)
I would go for option #3