Cannot use k8s.ns.name filtercheck - unknown filtercheck field k8s.ns.name
Hi,
I'm trying to do some exercises with Falco. The scenario: if a configmap is deleted in the test-ns namespace, Falco should send a notification. I changed the "Create/Modify Configmap With Private Credentials" rule like this:
- rule: Create/Modify Configmap With Private Credentials
  desc: >
    Detect creating/modifying a configmap containing a private credential (aws key, password, etc.)
  condition: kevt and configmap and kmodify and contains_private_credentials and k8s.ns.name=test-ns
  output: Kubernetes configmap with private credential (namespace=%k8s.ns.name, user=%ka.user.name verb=%ka.verb configmap=%ka.req.configmap.name config=%ka.req.configmap.obj)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]
When I restart Falco, it gives an "unknown filtercheck field k8s.ns.name" error:
Jan 15 15:57:05 argela falco[3338]: Sat Jan 15 15:57:05 2022: Runtime error: Could not load rules file /etc/falco/k8s_audit_rules.yaml: 1 errors:
Jan 15 15:57:05 argela falco[3338]: Invalid output format 'Kubernetes configmap with private credential (namespace=%k8s.ns.name, user=%ka.user.name verb=%ka.verb configmap=%ka.req.configmap.name config=%ka.req.configmap.obj)': 'Could not parse format string "Kubernetes configmap with private credential (namespace=%k8s.ns.name, user=%ka.user.name verb=%ka.verb configmap=%ka.req.configmap.name config=%ka.req.configmap.obj)": unknown filtercheck field k8s.ns.name, user=%ka.user.name verb=%ka.verb configmap=%ka.req.configmap.name config=%ka.req.configmap.obj)'
argela@argela:~$ falco --version
Falco version: 0.30.0
Driver version: 3aa7a83bf7b9e6229a3824e3fd1f4452d1e95cb4
argela@argela:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic
regards,
yavuz
@yvzsrt: There is not a label identifying the kind of this issue. Please specify it either using /kind <group> or manually from the side menu.
I don't see any apply-label icon. Is that something I'm missing on my end?
Hi,
I'm experiencing a similar issue, and I assume that k8s_audit events do not automatically expose the k8s.* fields within Falco. Is there a way to look up metadata through Falco, for example by using ka.target.namespace, the way syscall events do?
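Something along these lines is what I have in mind; this is just a sketch, and I haven't verified that ka.target.namespace resolves for configmap audit events:

- rule: Create/Modify Configmap With Private Credentials
  desc: >
    Detect creating/modifying a configmap containing a private credential (aws key, password, etc.)
  # ka.target.namespace belongs to the k8s_audit (ka.*) field class, so it should not
  # depend on the K8s API server metadata fetching that k8s.ns.name requires
  condition: kevt and configmap and kmodify and contains_private_credentials and ka.target.namespace=test-ns
  output: Kubernetes configmap with private credential (namespace=%ka.target.namespace, user=%ka.user.name verb=%ka.verb configmap=%ka.req.configmap.name config=%ka.req.configmap.obj)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]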
Hey @yvzsrt, did you try running Falco with the -K option? The k8s.* field class is available when the K8s API server metadata fetching is active.
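For example, an invocation roughly along these lines should enable the metadata fetch; the API server URL and token path are placeholders, so adjust them for your setup:

# -k points Falco at the K8s API server, -K supplies the bearer token used to authenticate
falco -k https://<k8s-api-server>:443 \
      -K /var/run/secrets/kubernetes.io/serviceaccount/token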
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/falcosecurity/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/falcosecurity/community.
/lifecycle rotten
@jasondellaluce, how do I use the -K flag? I couldn't find it in the documentation.
Here's an example of how we use it in our Kubernetes deployment templates: https://github.com/falcosecurity/deploy-kubernetes/blob/f5b6e71473f8a66f3ab33c0163ab73d2c18441cb/kubernetes/falco/templates/daemonset.yaml#L51.
You can also find the official option description by running falco -h.
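For reference, the relevant container args in that daemonset look roughly like this; it is a paraphrased sketch of the linked template, not the exact manifest:

# excerpt of the Falco container args (sketch)
args:
  - /usr/bin/falco
  - -K
  - /var/run/secrets/kubernetes.io/serviceaccount/token
  - -k
  - https://$(KUBERNETES_SERVICE_HOST)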
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Provide feedback via https://github.com/falcosecurity/community.
/close
@poiana: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Provide feedback via https://github.com/falcosecurity/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.