checkov
CKV_K8S_21 - ability to pass namespace for Helm managed templates
Describe the issue
CKV_K8S_21
The check does report that the default namespace is used when in fact Helm install uses a custom namespace when deploying to k8s.
Examples
```
Check: CKV_K8S_21: "The default namespace should not be used"
    FAILED for resource: ServiceAccount.default.release-name-hello-kubernetes
    File: /hello-kubernetes/templates/sa.yaml:3-13
    Guide: https://docs.bridgecrew.io/docs/bc_k8s_20

        3  | apiVersion: v1
        4  | kind: ServiceAccount
        5  | metadata:
        6  |   name: release-name-hello-kubernetes
        7  |   namespace: default
        8  |   labels:
        9  |     app.kubernetes.io/name: hello-kubernetes
        10 |     helm.sh/chart: hello-kubernetes-1.0.24
        11 |     app.kubernetes.io/instance: release-name
        12 |     app.kubernetes.io/managed-by: Helm
        13 |     app.kubernetes.io/version: "1.4"
```
As per Helm best practices (https://github.com/helm/helm/issues/5465#issuecomment-473942223), it's not recommended to hardcode the namespace, because Helm uses the `--namespace` flag at installation time to determine which namespace to deploy to.
So the template below for a ServiceAccount is, from Helm's perspective, completely valid:

```yaml
# cat sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "hello-kubernetes.fullname" . }}
  labels: {{- include "hello-kubernetes.labels" . | nindent 4 }}
```
Version (please complete the following information):
- version: 2.2.80
Additional context
Even if I add a namespace to metadata, it still reports FAILED:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "hello-kubernetes.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels: {{- include "hello-kubernetes.labels" . | nindent 4 }}
```
I'm running checkov in container in Jenkins like described here https://www.checkov.io/4.Integrations/Jenkins.html.
```groovy
stage('Static code analysis') {
    agent {
        docker {
            image 'bridgecrew/checkov:2.2.80'
            args "--entrypoint=''"
        }
    }
    steps {
        sh "checkov -d ${env.DIRECTORY} --framework helm"
    }
}
```
My default k8s context has no namespace defined, which is probably why it's being picked up as `default`.
I'm wondering if there is a way to tell checkov what namespace to infer when calling helm?
I couldn't find it documented here https://www.checkov.io/7.Scan%20Examples/Helm.html how to do this.
Setting the $HELM_NAMESPACE environment variable to a custom namespace doesn't help. It still reports `default`.
Any update on this? This is causing us issues when we try to use runners that are not on OpenShift.
I ran into this issue, though in a slightly different context than the original question. I'm not using Jenkins. In my case, I took inspiration from the section on third-party Helm charts to work around the issue.
I had to first render the Helm template using a non-default namespace. Then scan those rendered templates with checkov for static analysis.
```shell
helm template $DIRECTORY --namespace not-default --output-dir /tmp/helm-template
checkov -d /tmp/helm-template/${YOUR_CHART_NAME}
```
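The two-step workaround above can also be scripted. A minimal Python sketch, assuming `helm` and `checkov` are on `PATH`; the function names here are my own, not checkov API:

```python
# Sketch: render a Helm chart with a non-default namespace, then scan the
# rendered manifests with checkov. Helper names are hypothetical.
import subprocess
import tempfile


def build_render_cmd(chart_dir, namespace, out_dir):
    """argv for `helm template` with an explicit, non-default namespace."""
    return ["helm", "template", chart_dir,
            "--namespace", namespace,
            "--output-dir", out_dir]


def build_scan_cmd(rendered_dir):
    """argv for scanning the rendered manifests as plain Kubernetes files."""
    return ["checkov", "-d", rendered_dir]


def render_and_scan(chart_dir, namespace="not-default"):
    """Render into a temp dir, scan it, and return checkov's exit code."""
    with tempfile.TemporaryDirectory() as tmp:
        subprocess.run(build_render_cmd(chart_dir, namespace, tmp), check=True)
        return subprocess.run(build_scan_cmd(tmp)).returncode
```

Because the scan runs against already-rendered manifests, every resource carries the namespace Helm would have injected, so CKV_K8S_21 has a real value to check.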
I could not find any way to pass the `--namespace` option through to checkov when it autodetects Helm, as was being done in Jenkins in the original question:
```groovy
// ...
steps {
    sh "checkov -d ${env.DIRECTORY} --framework helm"
}
// ...
```
~It does feel frustrating that the $HELM_NAMESPACE environment variable is not being honoured, as described by @Constantin07.~ (see follow-up comment)
If it's just for pipeline static analysis, maybe you could follow a similar workaround of first rendering the charts to a temporary location? Otherwise, as described in the section on third-party Helm charts, there's always the option of `--skip-check CKV_K8S_21`:
> Note we skip check `CKV_K8S_21` for this process, which alerts on default namespace usage within Kubernetes manifests. Since Helm manages our namespaces, we always skip this internally when using the helm framework, so we want to replicate the same behaviour here.
I realize "just skip it" is antithetical to asking a question though 🙃
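If you prefer to keep the skip out of the pipeline command line, checkov also reads a config file via `--config-file`, whose keys mirror the CLI flags. A sketch of a `.checkov.yaml` fragment, under that assumption:

```yaml
# .checkov.yaml -- keys mirror the checkov CLI flags
framework:
  - helm
skip-check:
  - CKV_K8S_21
```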
Actually, I dug into the HELM_NAMESPACE environment variable a bit further, and it does get evaluated properly when checkov autodetects Helm. (FYI, @Constantin07). For what it's worth, I'm using checkov version 2.3.66. This can be shown with something like:
```shell
export HELM_NAMESPACE=not-default
checkov -d $DIRECTORY --framework helm -c CKV_K8S_21
```
So, to fix this in a Jenkins pipeline, as originally asked, it ought to be something like:
```groovy
stage('Static code analysis') {
    environment {
        HELM_NAMESPACE = 'not-default'
    }
    agent {
        docker {
            image 'bridgecrew/checkov:2.2.80'
            args "--entrypoint=''"
        }
    }
    steps {
        sh "checkov -d ${env.DIRECTORY} --framework helm"
    }
}
```
See https://www.jenkins.io/doc/book/pipeline/jenkinsfile/#setting-environment-variables for setting environment variables in a pipeline. It might take a bit more effort to pass the environment variable down through to a Docker container, as in the original question. Maybe something like:
```groovy
// ...
agent {
    docker {
        image 'bridgecrew/checkov:2.2.80'
        args "--entrypoint='' --env HELM_NAMESPACE='not-default'"
    }
}
// ...
```
I am also relying on Helm to manage the namespace, and using checkov as a GH action results in a failing pipeline due to CKV_K8S_21.
Hardcoding `namespace: {{ .Release.Namespace }}` in the templates doesn't solve the issue and, as already pointed out, is not a best practice.
Even setting the env `HELM_NAMESPACE='not-default'` in the checkov step still fails the check.
I think that skipping the check is just a workaround, not a solution. Maybe consider adding a namespace flag that tells checkov that the resources will be deployed to that specific namespace, even though Helm doesn't render `metadata.namespace` when called like `helm template --namespace not-default ...`
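Until such a flag exists, another stopgap is to post-process checkov's machine-readable report (`checkov -o json`) and filter out this one check when namespaces are Helm-managed. A sketch; the field names follow checkov's JSON report shape as I understand it, so treat them as an assumption:

```python
# Sketch: drop CKV_K8S_21 failures from a checkov JSON report.
# Assumes the `checkov -o json` shape: {"results": {"failed_checks": [...]}}
# where each failed check carries a "check_id" field.
def drop_namespace_check(report, check_id="CKV_K8S_21"):
    """Return the failed checks with the given check id filtered out."""
    failed = report.get("results", {}).get("failed_checks", [])
    return [check for check in failed if check.get("check_id") != check_id]
```

The pipeline could then fail only when the filtered list is non-empty, which keeps every other check enforced while ignoring the namespace one.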
Thanks @reidmiller-geotab for the suggestion. I've tried it, but unfortunately it still fails even with the HELM_NAMESPACE env var:
```
[Pipeline] {
[Pipeline] sh
+ export HELM_NAMESPACE=system
+ checkov -d pipelines/kubernetes/hello-kubernetes/deploy --framework helm --config-file .checkov.yaml --var-file pipelines/kubernetes/hello-kubernetes/deploy/values.yaml -o cli -o junitxml --output-file-path console,results.xml --soft-fail --quiet

helm scan results:

Passed checks: 85, Failed checks: 6, Skipped checks: 3

Check: CKV_K8S_21: "The default namespace should not be used"
    FAILED for resource: ServiceAccount.default.release-name-hello-kubernetes
    File: /hello-kubernetes/templates/sa.yaml:3-12
    Guide: https://docs.bridgecrew.io/docs/bc_k8s_20

        3 | apiVersion: v1
        4 | kind: ServiceAccount
        5 | metadata:
        6 |   name: release-name-hello-kubernetes

By bridgecrew.io | version: 2.3.199
```
Also, it tries to connect to the K8s cluster, which is annoying since the Helm chart is located on the local file system:
```
[Pipeline] sh
+ export HELM_NAMESPACE=not-default
+ checkov -d pipelines/kubernetes/hello-kubernetes/deploy --framework helm --skip-framework kubernetes --config-file .checkov.yaml --var-file pipelines/kubernetes/hello-kubernetes/deploy/values.yaml -o cli -o junitxml --output-file-path console,results.xml --soft-fail --quiet

2023-04-26 21:00:33,345 [MainThread  ] [WARNI]  Error processing helm dependancies for hello-kubernetes at source dir: pipelines/kubernetes/hello-kubernetes/deploy/hello-kubernetes. Working dir: /tmp/tmpyrv2or65. Error details: W0426 21:00:33.329716      36 loader.go:222] Config not found: /home/toolbox/.kube/config
2023-04-26 21:00:33,413 [MainThread  ] [WARNI]  Error processing helm chart hello-kubernetes at dir: pipelines/kubernetes/hello-kubernetes/deploy/hello-kubernetes. Working dir: /tmp/tmpyrv2or65. Error details: W0426 21:00:33.393110      40 loader.go:222] Config not found: /home/toolbox/.kube/config
W0426 21:00:33.395182      40 loader.go:222] Config not found: /home/toolbox/.kube/config
```
I found another place where this rule leads to problems. We add a random namespace to the Helm charts just to be sure that no default namespace is used by accident. For deployment we are using ArgoCD, but ArgoCD will not override namespaces set by the namespace attribute. So far, our only solution is to disable this rule.
Thanks for contributing to Checkov! We've automatically marked this issue as stale to keep our issues list tidy, because it has not had any activity for 6 months. It will be closed in 14 days if no further activity occurs. Commenting on this issue will remove the stale tag. If you want to talk through the issue or help us understand the priority and context, feel free to add a comment or join us in the Checkov slack channel at codifiedsecurity.slack.com Thanks!
remove /stale