general/containerresources policy is not working for requests
I wanted to use these templates, so I deployed them on a test cluster to verify that they work. I installed Gatekeeper using Helm.
I set both requests and limits to enforce CPU and memory.
Expected Behaviour: the policy should deny a resource if CPU or memory is missing from either limits or requests.
Actual Behaviour: it denies a resource if CPU or memory is missing from limits, but it does not deny if CPU or memory is missing from requests.
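For illustration, a constraint that enforces both sections looks roughly like the sketch below (the `K8sRequiredResources` kind and the `limits`/`requests` parameters come from the containerresources template; the metadata name and match scope are placeholders):

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredResources
metadata:
  name: container-must-have-cpu-and-memory  # illustrative name
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    # Deny containers missing any of these keys under resources.limits
    limits:
      - cpu
      - memory
    # Deny containers missing any of these keys under resources.requests
    requests:
      - cpu
      - memory
```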
The code for the two fields is basically identical, so it should work for both:
https://github.com/open-policy-agent/gatekeeper-library/blob/6c3b8a75db8d7b3f0e10ea13d872eef9f3c69d48/library/general/containerresources/template.yaml#L61-L79
Can you post a copy of the constraint and constraint template you are using along with any resource that isn't being enforced?
To avoid eventual consistency concerns, can you also make sure your unexpected success happens after a successful deny? That way we know the behavior wasn't simply because the constraint was still being ingested.
Here is a link to both Template and Constraint
I think the policy is working fine. When I run `kubectl describe pod PODNAME`,
it shows that the request is set and the values are the same as the limits. Is it the default behavior of Kubernetes to set the request equal to the limit if the request is not defined? But still, the user is not bound to define the request if the limit is defined.
> But still, the user is not bound to define the request if the limit is defined.
The code for the template requires both to be defined, so long as the constraint defines them.
Per my above comment we'd need:
- A concrete constraint/template as it exists on the cluster
- An example resource that you expect should have been rejected, but wasn't
- Confirmation that the unexpected success happened after a successful deny, so we know the behavior wasn't simply due to eventual consistency (i.e., the constraint still being ingested)
I think everything is working fine and the constraint is enforced. If I remove the limit, then the request is enforced, and if I add both the request and the limit, both are enforced. The only issue was that the request is set equal to the limit by default before feeding it to OPA, so to OPA it looks like the user has defined both requests and limits.
> request is set equal to the limit by default before feeding it to OPA
Can you share an example of the resource before it was deployed and then what it looks like after it's running in the cluster? It almost sounds like something is mutating the request on your cluster to make the request equal to the limit.
Before Deployment

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: httpgo
  name: httpgo
spec:
  containers:
  - image: cmssw/httpgo
    name: httpgo
    resources:
      limits:
        cpu: "200m"
        memory: "1Gi"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
After Deployment

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: httpgo
  name: httpgo
  namespace: default
spec:
  containers:
  - image: cmssw/httpgo
    imagePullPolicy: Always
    name: httpgo
    resources:
      limits:
        cpu: 200m
        memory: 1Gi
      requests:
        cpu: 200m
        memory: 1Gi
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
```
@ritazh do you know if that's vanilla K8s behavior? I don't think it is.
@aamirali-dev Unfortunately, if that's what K8s is sending to Gatekeeper, we have no way of knowing what the original object looked like. If this is not vanilla behavior, you could try to find out what's causing it, which would be specific to your cluster provider and individual setup.
You can also use the `gator test` command to validate resources locally, which will give you access to the original object as-authored:
https://open-policy-agent.github.io/gatekeeper/website/docs/gator#the-gator-test-subcommand
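For example, a minimal suite for this case might look like the sketch below (the file names `template.yaml`, `constraint.yaml`, and `pod-missing-requests.yaml` are placeholders; the `Suite` schema follows the gator docs):

```yaml
apiVersion: test.gatekeeper.sh/v1alpha1
kind: Suite
metadata:
  name: containerresources
tests:
- name: required-resources
  # Template and constraint under test (paths are placeholders)
  template: template.yaml
  constraint: constraint.yaml
  cases:
  - name: missing-requests
    # The as-authored pod manifest: limits set, requests omitted
    object: pod-missing-requests.yaml
    assertions:
    # Expect at least one violation for the missing requests
    - violations: yes
```

Because gator evaluates the manifest as written, the missing requests would be caught before the API server defaults them from the limits.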
> @ritazh do you know if that's vanilla K8s behavior? I don't think it is.
@maxsmythe @aamirali-dev I have been wondering the same thing, and it seems that it is vanilla K8s behaviour to copy limits to requests if they are undefined:
> If you specify a CPU limit for a Container but do not specify a CPU request, Kubernetes automatically assigns a CPU request that matches the limit. Similarly, if a Container specifies its own memory limit, but does not specify a memory request, Kubernetes automatically assigns a memory request that matches the limit.
https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#if-you-specify-a-cpu-limit-but-do-not-specify-a-cpu-request
Thanks for the info!
It sounds like this is not something an admission webhook could check for. Something like `gator test` (which evaluates the raw config) would be able to detect this scenario in a CI/CD pipeline.
This issue/PR has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.