conftest test does not output all policy names
Hi,
Currently, conftest test resultDir --output table --all-namespaces results in:
+---------+-----------------------------------------------------+--------------------------------+
| RESULT | FILE | MESSAGE |
+---------+-----------------------------------------------------+--------------------------------+
| success | resultDir\test\templates\deployment.yaml | |
| success | resultDir\test\templates\deployment.yaml | |
| failure | resultDir\test\templates\deployment.yaml | could not find mandatory |
| | | label in deployment spec: |
| | | some-mandatory-label |
| failure | resultDir\test\templates\deployment.yaml | could not find mandatory |
| | | label in deployment spec: |
| | | another-label |
| success | resultDir\test\templates\service.yaml | |
| success | resultDir\test\templates\service.yaml | |
| success | resultDir\test\templates\service.yaml | |
+---------+-----------------------------------------------------+--------------------------------+
Requesting a feature that would let us visualize successful policies in the message column as well, and something similar for the other --output implementations.
Reference link to discussion: here.
@Biswajee could you provide a small example of a Rego policy and the result you would expect to see when it succeeds?
There's been some discussion on this in the past, but success messages always seemed noisy, since the only thing we'd be able to report is the name of the rule that succeeded. For most users, the rules are just deny with a message. However, if your rule names are a bit more descriptive, I could see value in this behind a flag.
@jpreese Thanks for picking up the request. I would like the name of the policy file to be returned as the success message, since we generally have descriptive filenames for the policies defined in them. Consider the example policy:
filename: policy/enforce-labels.rego
package kubernetes.admission
mandatory_labels := {"dev", "prod"}

deny[msg] {
    input.kind == "Deployment"
    provided := {label | input.metadata.labels[label]}
    missing := mandatory_labels - provided
    count(missing) > 0
    msg := sprintf("Mandatory labels %v are not present", [missing])
}
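For readers less familiar with Rego, the mandatory-label check this policy expresses boils down to set arithmetic. Here is a rough Python sketch of that logic; the function and names below are invented for illustration and are not conftest or OPA internals:

```python
# Illustrative sketch only: the label check the Rego policy expresses,
# written as plain set arithmetic. Not conftest/OPA code.
MANDATORY_LABELS = {"dev", "prod"}

def deny_messages(manifest: dict) -> list:
    """Return one message per missing mandatory label; empty means success."""
    if manifest.get("kind") != "Deployment":
        return []
    provided = set(manifest.get("metadata", {}).get("labels", {}))
    missing = sorted(MANDATORY_LABELS - provided)
    return [f"Mandatory label {label} is not present" for label in missing]

# A Deployment missing both labels yields two failure messages,
# mirroring the two separate failure rows in the table above.
manifest = {"kind": "Deployment", "metadata": {"labels": {"app": "web"}}}
print(deny_messages(manifest))
```

Each missing label yields its own message, which matches the per-label failure rows in the original table output.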
I can imagine conftest test resultDir --output table --all-namespaces with some flag to print successful policy file names:
+---------+-----------------------------------------------------+--------------------------------+
| RESULT | FILE | MESSAGE |
+---------+-----------------------------------------------------+--------------------------------+
| success | resultDir\test\templates\deployment.yaml | enforce-labels.rego |
+---------+-----------------------------------------------------+--------------------------------+
This can help users understand that the policy was evaluated, and at the same time provides some positive vibes that some policies succeeded 😄.
In case you are looking for a longer filename, here's one: allow_only_https_traffic_storage_account_cloud.rego
Thanks for the clarification, @Biswajee! We may need to provide the option to display the filename and/or policy name. Showing the filename makes sense in this scenario, but other users may have fewer files with more verbose rule names.
Hi @jpreese,
I agree with you on fewer files but more verbose rules. Still, a flag for displaying filenames would be really useful, since we maintain verbose policy filenames and often a single rule per file. I found examples of this naming convention in open-policy-agent/gatekeeper-library, and would like to highlight a policy repository where one policy per file is used extensively: raspbernetes/k8s-security-policies.
Oh sorry if I wasn't clear. I agree that showing the filename is a good thing to have. But we should also provide a way to include the rule name for users who do the opposite (generic policy files with very granular rule names). e.g.
Message: enforce-labels.rego // this works great for named files
Message: deny.rego // less so when the rule is deny_enforce_labels
I'm interested in this functionality as well. I'm more specifically looking for the successful rules to be enumerated in the JUnit output so my CI engine can report on it.
Related to this issue as well (specifically my comments about JUnit which echo @jschwanz )
I think I'm trying to figure out the same thing as @jcmcken and @jschwanz, but I got a little confused between this issue and #258, and wanted to recap the current state of things for conftest 0.30.0.
If I use granular deny_xxx rules, all in the same .rego file and package name, currently conftest output:
- Displays independent results for each deny_xxx rule, including rules that succeed (generate no messages)
- Does not give identifying information distinguishing the result for successful rules from each other
That's what other users experience, right?
Here's my example.
If I use the deny_xxx rule naming feature and create two deny_xxx rules:
deny_host_ports_containers[msg] {
    count(host_ports_container) > 0
    msg := sprintf("containers requiring host ports %v", [host_ports_container])
}

deny_escalation_containers[msg] {
    count(escalation_container) > 0
    msg := sprintf("containers requiring privilege escalation %v", [escalation_container])
}
Then conftest (version 0.30.0) will show two results for each input file:
$ conftest test apps_v1_deployment_* -o table -n k8spodsrequirepodsecurity -p ../k8s-arch-ent-pocs-dev-cluster/shared/manifests/gatekeeper/test/data -p ../k8s-arch-ent-pocs-dev-cluster/shared/manifests/gatekeeper/template
+---------+----------------------------------------------+---------------------------+--------------------------------+
| RESULT | FILE | NAMESPACE | MESSAGE |
+---------+----------------------------------------------+---------------------------+--------------------------------+
| success | apps_v1_deployment_organization-service.yaml | k8spodsrequirepodsecurity | SUCCESS |
| success | apps_v1_deployment_organization-service.yaml | k8spodsrequirepodsecurity | SUCCESS |
| success | apps_v1_deployment_department-service.yaml | k8spodsrequirepodsecurity | SUCCESS |
| failure | apps_v1_deployment_department-service.yaml | k8spodsrequirepodsecurity | containers requiring privilege |
| | | | escalation {"service"} |
| success | apps_v1_deployment_employee-service.yaml | k8spodsrequirepodsecurity | SUCCESS |
| success | apps_v1_deployment_employee-service.yaml | k8spodsrequirepodsecurity | SUCCESS |
+---------+----------------------------------------------+---------------------------+--------------------------------+
The "junit" output is similar to the "table" output - basically MESSAGE is embedded in the name of each of the 6 <testcase>s.
The failing rule (deny_escalation_containers) has a chance to produce a message that indicates which rule it is, but the succeeding rule (deny_host_ports_containers) does not, and only says "success".
If the rule name (deny_escalation_containers vs. deny_host_ports_containers) were somehow included in the individual result (in the RESULT, in the MESSAGE, or in some new column), that would improve things for me.
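One possible shape for this: carry the rule name on each per-rule result and fall back to it when there are no failure messages. The sketch below is purely hypothetical; CheckResult and table_row are invented names, not conftest's actual internals:

```python
# Hypothetical sketch of a per-rule result row that keeps the rule name.
# These types and functions are invented for illustration only.
from dataclasses import dataclass

@dataclass
class CheckResult:
    file: str
    namespace: str
    rule: str       # e.g. "deny_host_ports_containers"
    messages: list  # an empty list means the rule succeeded

def table_row(r: CheckResult) -> tuple:
    status = "failure" if r.messages else "success"
    # On success, fall back to the rule name instead of a bare "SUCCESS",
    # so successful rows remain distinguishable from each other.
    message = "; ".join(r.messages) if r.messages else r.rule
    return (status, r.file, r.namespace, message)

row = table_row(CheckResult(
    "apps_v1_deployment_employee-service.yaml",
    "k8spodsrequirepodsecurity",
    "deny_host_ports_containers",
    [],
))
print(row)
```

With this shape, the two success rows for each file would show their respective rule names in the MESSAGE column rather than an indistinguishable "SUCCESS".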
Do others agree?
Thanks!
After some more testing, I see how conftest's NAMESPACE column gives you a way to distinguish successful rules, if you're willing to break up your rules into fine-grained Rego packages (and files).
If I take what was a single Rego file and package, with a single violation rule, and split it into multiple packages, each with their own package name and violation rule, then NAMESPACE serves the purpose.
7 Rego packages:
$ ( cd ../k8s-arch-ent-pocs-dev-cluster/shared/manifests/gatekeeper/unified-policy ; find . -name src.rego -exec grep package {} +)
./pod-security-critical-policies/container_deny_escalation/src.rego:package container_deny_escalation
./pod-security-critical-policies/container_deny_host_ports/src.rego:package container_deny_host_ports
./pod-security-critical-policies/container_deny_non_root_disabling/src.rego:package container_deny_non_root_disabling
./pod-security-critical-policies/container_deny_privileged/src.rego:package container_deny_privileged
./pod-security-critical-policies/pod_deny_host_network/src.rego:package pod_deny_host_network
./pod-security-critical-policies/pod_deny_host_path/src.rego:package pod_deny_host_path
./pod-security-critical-policies/pod_deny_host_pid/src.rego:package pod_deny_host_pid
conftest 0.30.0 run:
$ conftest test apps* -o table --all-namespaces -p ../k8s-arch-ent-pocs-dev-cluster/shared/manifests/gatekeeper/unified-policy
+---------+----------------------------------------------+-----------------------------------+--------------------------------+
| RESULT | FILE | NAMESPACE | MESSAGE |
+---------+----------------------------------------------+-----------------------------------+--------------------------------+
| failure | apps_v1_deployment_employee-service.yaml | container_deny_host_ports | containers requiring host |
| | | | ports {"service"} |
| success | apps_v1_deployment_organization-service.yaml | container_deny_host_ports | SUCCESS |
| success | apps_v1_deployment_department-service.yaml | container_deny_host_ports | SUCCESS |
| success | apps_v1_deployment_department-service.yaml | container_deny_non_root_disabling | SUCCESS |
| failure | apps_v1_deployment_employee-service.yaml | container_deny_non_root_disabling | containers disabling non-root |
| | | | validation {"service"} |
| success | apps_v1_deployment_organization-service.yaml | container_deny_non_root_disabling | SUCCESS |
| success | apps_v1_deployment_department-service.yaml | container_deny_privileged | SUCCESS |
| success | apps_v1_deployment_employee-service.yaml | container_deny_privileged | SUCCESS |
| success | apps_v1_deployment_organization-service.yaml | container_deny_privileged | SUCCESS |
| success | apps_v1_deployment_department-service.yaml | pod_deny_host_network | SUCCESS |
| success | apps_v1_deployment_employee-service.yaml | pod_deny_host_network | SUCCESS |
| success | apps_v1_deployment_organization-service.yaml | pod_deny_host_network | SUCCESS |
| success | apps_v1_deployment_department-service.yaml | pod_deny_host_path | SUCCESS |
| success | apps_v1_deployment_employee-service.yaml | pod_deny_host_path | SUCCESS |
| success | apps_v1_deployment_organization-service.yaml | pod_deny_host_path | SUCCESS |
| success | apps_v1_deployment_department-service.yaml | pod_deny_host_pid | SUCCESS |
| success | apps_v1_deployment_employee-service.yaml | pod_deny_host_pid | SUCCESS |
| success | apps_v1_deployment_organization-service.yaml | pod_deny_host_pid | SUCCESS |
| failure | apps_v1_deployment_department-service.yaml | container_deny_escalation | containers requiring privilege |
| | | | escalation {"service"} |
| success | apps_v1_deployment_employee-service.yaml | container_deny_escalation | SUCCESS |
| success | apps_v1_deployment_organization-service.yaml | container_deny_escalation | SUCCESS |
+---------+----------------------------------------------+-----------------------------------+--------------------------------+
Each input file has exactly 7 results, 1 per Rego package, whether the package's violation rule returned nothing (SUCCESS) or a single violation message, and the SUCCESSes are distinguished by the package name showing up in the NAMESPACE column.
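That "one row per package per file" behavior can be sketched as follows, assuming one violation check per package. The checks dict and evaluate function are invented for illustration and are not conftest internals:

```python
# Hypothetical sketch: one check per "package", one row per package per file,
# with the package name standing in for the NAMESPACE column.
checks = {
    "container_deny_host_ports": lambda m: (
        ["containers requiring host ports"] if m.get("hostPorts") else []),
    "container_deny_escalation": lambda m: (
        ["containers requiring privilege escalation"]
        if m.get("allowPrivilegeEscalation") else []),
}

def evaluate(filename, manifest):
    # Exactly one row per package, so successes stay distinguishable
    # by the package name even when the message is just "SUCCESS".
    return [
        ("failure" if msgs else "success",
         filename, pkg, msgs[0] if msgs else "SUCCESS")
        for pkg in sorted(checks)
        for msgs in [checks[pkg](manifest)]
    ]

rows = evaluate("apps_v1_deployment_employee-service.yaml",
                {"hostPorts": True, "allowPrivilegeEscalation": False})
for row in rows:
    print(row)
```

Splitting each rule into its own package trades file granularity for identifiable successes: the NAMESPACE column does the work that a rule-name column would otherwise do.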
This example is more in keeping with the Rego package organization in the konstraint project's examples, lots of fine-grained Rego packages.
Does conftest's current behavior of including the Rego package name in NAMESPACE help the other folks commenting on this issue?
Or do the other folks need to avoid fine-grained Rego packages? Like either of these situations:
- Multiple Rego files sharing the same package name
- Single Rego file (and package) with many fine-grained deny_xxx rules
Are there any downsides (performance etc.) to splitting the Rego code into so many Rego packages?
Thanks!
@jdoylei My impression of the Konstraint repo's organization is that it's only organized that way because of how Gatekeeper is designed. But Gatekeeper's organization is not necessarily a model everyone should follow. For example, Gatekeeper doesn't support the concept of "shared library" code that can be re-used across policies, which necessitates copying and pasting shared library rules across many different constraint templates, and splitting your Rego code across many tiny files. Needless to say, it's quite a pain to work with in practice.