Getting "/<NA>" for fd.name with Falco 0.32.1
Describe the bug
fd.name contains /<NA> when a bash shell is spawned and the Write below root rule is triggered.
I don't expect <NA>, and I definitely do not expect /<NA>.
I assume it is the opening of /dev/tty, but I am not certain:
# strace -eopenat bash 2>&1 | grep -E 'WRON|RDWR'
openat(AT_FDCWD, "/dev/tty", O_RDWR|O_NONBLOCK) = 3
(That's the only WRONLY or RDWR opening I see when starting bash.)
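If the /dev/tty theory is right, replaying the suspect open by hand from a shell in the container should produce the same bogus event (a sketch, untested):
# replicate the suspect openat(..., "/dev/tty", O_RDWR) by hand; if the
# theory holds, the corresponding Falco event should again show /<NA>
exec 3<>/dev/tty   # open fd 3 on /dev/tty read-write
exec 3>&-          # close it again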
How to reproduce it
Log the JSON output somewhere using jsonOutput: true.
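With the chart, that can be set like this (a sketch; I'm assuming the falco.jsonOutput chart value and typical release/namespace names):
helm upgrade falco falcosecurity/falco -n kube-falco \
  --set falco.jsonOutput=true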
Start a bash shell with a TTY (kubectl -n kube-falco exec -it falco-2s84s -- bash).
Check the JSON output:
{
  "output": "14:05:22.494635348: Error File below / or /root opened for writing (user=root user_loginuid=-1 command=bash parent=bash file=/<NA> program=bash container_id=c8a1e44e20da image=falcosecurity/falco) k8s.ns=kube-falco k8s.pod=falco-2s84s container=c8a1e44e20da",
  "priority": "Error",
  "rule": "Write below root",
  "source": "syscall",
  "tags": [
    "filesystem",
    "mitre_persistence"
  ],
  "time": "2022-07-12T12:05:22.494635348Z",
  "output_fields": {
    "container.id": "c8a1e44e20da",
    "container.image.repository": "falcosecurity/falco",
    "evt.time": 1657627522494635300,
    "fd.name": "/<NA>",
    "k8s.ns.name": "kube-falco",
    "k8s.pod.name": "falco-2s84s",
    "proc.cmdline": "bash",
    "proc.name": "bash",
    "proc.pname": "bash",
    "user.loginuid": -1,
    "user.name": "root"
  }
}
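To pull just the offending field out of the log stream (a sketch, assuming the pod name from above; fromjson? skips non-JSON lines such as the driver-loader banner):
kubectl -n kube-falco logs falco-2s84s \
  | jq -rR 'fromjson? | select(.rule == "Write below root") | .output_fields."fd.name"'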
Observe how fd.name contains both a / and a <NA>.
Expected behaviour
fd.name = "/dev/tty" (I think)
Environment
Running helm chart 1.19.4, but changed the Falco image in the daemonset from 0.32.0 to 0.32.1.
# falco --version
Falco version: 0.32.1
Libs version: 0.7.0
Plugin API: 1.0.0
Driver:
  API version: 1.0.0
  Schema version: 2.0.0
  Default driver: 2.0.0+driver
# falco --support | jq .system_info
Tue Jul 12 14:21:59 2022: Falco version 0.32.1
Tue Jul 12 14:21:59 2022: Falco initialized with configuration file /etc/falco/falco.yaml
Tue Jul 12 14:21:59 2022: Configured rules filenames:
Tue Jul 12 14:21:59 2022: /etc/falco/falco_rules.yaml
Tue Jul 12 14:21:59 2022: /etc/falco/falco_rules.local.yaml
Tue Jul 12 14:21:59 2022: /etc/falco/rules.d
Tue Jul 12 14:21:59 2022: Loading rules from file /etc/falco/falco_rules.yaml:
Tue Jul 12 14:21:59 2022: Loading rules from file /etc/falco/falco_rules.local.yaml:
Tue Jul 12 14:21:59 2022: Loading rules from file /etc/falco/rules.d/xxx.yaml:
Tue Jul 12 14:21:59 2022: Loading rules from file /etc/falco/rules.d/xxx.yaml:
Tue Jul 12 14:22:00 2022: Loading rules from file /etc/falco/rules.d/xxx.yaml:
{
  "machine": "x86_64",
  "nodename": "xxx",
  "release": "5.4.0-100-generic",
  "sysname": "Linux",
  "version": "#113-Ubuntu SMP Thu Feb 3 18:43:29 UTC 2022"
}
Hi @wdoekes, thank you for reporting this! You wrote "but changed the Falco image in the daemonset from 0.32.0 to 0.32.1". Does that mean Falco 0.32.0 works correctly, without this issue?
Haha. No.
Sorry that I wasn't clear on that.
I did a lot of testing with 0.32.0, and I had hoped that the bulk of the <NA> occurrences would be gone after #2048, which should be in 0.32.1. I wasn't that interested in %user, so I haven't checked whether that is fixed now.
Checking now, I still see occurrences of user=<NA>, but those are indeed from a pod that is missing its UID in /etc/passwd:
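# print the falco/driver version banner plus, for every <NA> hit,
# the rule name and the offending field: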
$ sudo kubectl -n kube-falco logs falco-nj4cg |
sed -ne '/.*version=.*/p;/<NA>/s/.*\("rule":"[^"]*"\).*\("[a-z.]*":"[^"]*<NA>"\).*/\1,...,\2/p'
* Running falco-driver-loader for: falco version=0.32.1, driver version=2.0.0+driver
"rule":"Shell process without TTY started",...,"proc.pname":"<NA>"
"rule":"Shell process without TTY started",...,"user.name":"<NA>"
"rule":"Shell process with TTY started",...,"user.name":"<NA>"
"rule":"Write below root",...,"user.name":"<NA>"
"rule":"Write below root",...,"fd.name":"/<NA>"
So while the %user problem is likely gone, the %fd.name problem is not.
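As a sanity check on the missing-UID theory, something like this should show whether the UID resolves inside the offending container (a sketch; namespace, pod, and UID are placeholders, and getent must exist in the image):
# no output (and a non-zero exit) means no matching /etc/passwd entry
kubectl -n <ns> exec <pod> -- getent passwd <uid>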
I can reproduce this. It looks like the bug was introduced in 0.31.1 and seems to affect only the eBPF probe. I opened the relevant issue for libs: https://github.com/falcosecurity/libs/issues/477
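For anyone hitting this in the meantime: since it seems to affect only the eBPF probe, you can check whether the DaemonSet runs with eBPF and, as a temporary workaround, fall back to the kernel module (a sketch; ebpf.enabled is the chart value I'm assuming, and release/namespace names may differ):
# Falco uses the eBPF probe when FALCO_BPF_PROBE is set in its environment
kubectl -n kube-falco get ds falco \
  -o jsonpath='{.spec.template.spec.containers[0].env[*].name}' | grep -o FALCO_BPF_PROBE

# temporary workaround: switch back to the kernel module
helm upgrade falco falcosecurity/falco -n kube-falco --set ebpf.enabled=false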
Thank you @LucaGuerra and @wdoekes, I'll take a look.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/falcosecurity/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/falcosecurity/community.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Provide feedback via https://github.com/falcosecurity/community.
/close
@poiana: Closing this issue.