CIS compliance check 4.2.2 seems to be misreporting
What steps did you take and what happened:
- Installed trivy-operator via the helm chart
- Printed out the CIS compliance results:
$ kubectl get compliance cis -o=jsonpath='{.status}' | jq '.summaryReport.controlCheck[] | select(.totalFail != 0 and .totalFail != null)'
{
"id": "4.2.2",
"name": "Ensure that the --authorization-mode argument is not set to AlwaysAllow",
"severity": "CRITICAL",
"totalFail": 3
}
While manually auditing our nodes, it turns out that we don't set AlwaysAllow and actually have a compliant setting.
What did you expect to happen:
Expected that result not to fail.
Environment:
- Trivy-Operator version (use trivy-operator version): 0.17.1
- Helm App version: 0.19.1
- Kubernetes version (use kubectl version):
Server Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.7-eks-4f4795d", GitCommit:"3719c8491f81867f591e895a43b4f5aab4145794", GitTreeState:"clean", BuildDate:"2023-10-20T23:21:04Z", GoVersion:"go1.20.10", Compiler:"gc", Platform:"linux/amd64"}
- OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc): Amazon Linux 2
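For context, the manual node audit mentioned above can be sketched roughly like this. This is a sketch, not the exact commands used; kubelet_flag_check and its /proc lookup are illustrative assumptions, to be run on the node itself:

```shell
# Sketch of the manual audit: check whether --authorization-mode appears on
# the running kubelet's command line. kubelet_flag_check is an illustrative
# helper, not part of trivy-operator.
kubelet_flag_check() {
  # Allow passing a command line explicitly; otherwise read the kubelet's
  # /proc cmdline (NUL-separated, so translate NULs to spaces).
  local cmdline="${1:-$(tr '\0' ' ' < "/proc/$(pgrep -o kubelet)/cmdline" 2>/dev/null)}"
  case "$cmdline" in
    *--authorization-mode*) echo "flag set on command line" ;;
    *) echo "flag not set on command line" ;;
  esac
}
```

If the flag is absent from the command line, the effective mode has to come from the kubelet configuration file instead.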
@JAORMX can you please advise how you check the value? Here is how our node collector extracts the info
Thanks for the link @chen-keinan ! So, in this case, the --authorization-mode flag is not set at all in the kubelet's command line. Instead, they rely on setting it in the kubelet config which is in /etc/kubernetes/kubelet/kubelet-config.json. In that file you'll find the relevant authorization key, with a mode setting.
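To illustrate, reading the effective mode out of that file can be done with jq. A minimal sketch, assuming the EKS default path above; kubelet_authz_mode is a hypothetical helper name, not part of any tool here:

```shell
# Read authorization.mode from the kubelet config file. Defaults to the EKS
# path mentioned above; pass another path for other distributions.
kubelet_authz_mode() {
  local cfg="${1:-/etc/kubernetes/kubelet/kubelet-config.json}"
  # Print the configured mode, or "NotSet" if the key is missing.
  jq -r '.authorization.mode // "NotSet"' "$cfg"
}
```

On a compliant node this prints a mode such as Webhook rather than AlwaysAllow.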
Thanks for your input, I suspected it as well. I'll update the checks and release a new k8s-node-collector.
@chen-keinan feel free to ping me once you have a review up. Thanks for checking this out
@JAORMX please let me know if v0.18.0-rc solve this issue
Will do, after the holidays. I'm back to work on Jan 2
@chen-keinan it did not work. I still get that issue after the v0.18.0 upgrade. There are also other funky errors that are not applicable, such as the report complaining about API server config permissions. We don't even have that config, as we don't run the API server (it's a managed k8s).
Thanks, strange. I tested it on managed k8s as well; I'll have another look.
- Can you please add the permission error?
- Are you using the helm chart for deployment?
Yes, I'm using the helm chart.
@chen-keinan I reverified and I had misread the report. The permissions are not reported as an error. However, the original issue is still being reported:
{
"id": "4.2.2",
"name": "Ensure that the --authorization-mode argument is not set to AlwaysAllow",
"severity": "CRITICAL",
"totalFail": 2
}
And it shouldn't. It should run a re-scan once a day, right? I did wait a full day after the upgrade.
It's cron-based; you can configure it: https://github.com/aquasecurity/trivy-operator/blob/main/deploy/helm/values.yaml#L517
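For reference, the relevant knob in the chart values looks roughly like this (key name and default taken on trust from the linked values.yaml; verify against your chart version):

```yaml
# deploy/helm/values.yaml fragment (sketch; check the linked file for the
# exact key and default in your chart version)
compliance:
  # cron expression controlling how often the compliance report is rescanned
  cron: "0 */6 * * *"
```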
@JAORMX could you please see if you can catch the output of the node collector with
kubectl logs -n trivy-system node-collector-<id>
before the pod is deleted, and let me know what value you get for kubeletAuthorizationModeArgumentSet, for example:
"kubeletAuthorizationModeArgumentSet": {
"values": [
"Webhook"
]
}
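Since the pod disappears quickly, one approach is to save the log to a file as soon as the pod appears and filter it afterwards. A sketch, assuming jq is available; extract_authz_check is a hypothetical helper, and the exact JSON nesting of the node-collector output may differ, hence the recursive descent:

```shell
# Filter a saved node-collector log for the kubeletAuthorizationModeArgumentSet
# check. Recursive descent (..) finds the key regardless of where it sits in
# the JSON. extract_authz_check is an illustrative helper.
extract_authz_check() {
  jq '[.. | .kubeletAuthorizationModeArgumentSet? // empty] | first' "$1"
}
```

Capture with something like kubectl logs -n trivy-system node-collector-<id> > node-collector.json right after the job starts, then run extract_authz_check node-collector.json.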
@chen-keinan I don't see a node collector pod. Does it get cleaned up?
Yes, you need to catch it fast: after the job completes, the pod gets cleaned up.