falco 0.31, getting <NA> output for %container.name (on K3S)
Describe the bug
The output for %container.name is <NA>
How to reproduce it
This is the related rule
- rule: Terminal shell in container
  desc: A shell was used as the entrypoint/exec point into a container with an attached terminal.
  condition: >
    spawned_process and container
    and shell_procs and proc.tty != 0
    and container_entrypoint
    and not user_expected_terminal_shell_in_container_conditions
  output: "[%evt.time][%container.id] [%container.name]"
  priority: NOTICE
  tags: [container, shell, mitre_execution]
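For context, the rule fires on an interactive, TTY-attached shell spawned inside a container; a typical trigger is sketched below (the pod name is a placeholder, not taken from this report):
# Hypothetical reproduction: spawn an interactive shell with a TTY in a running pod.
kubectl exec -it <some-pod> -- /bin/bash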
Expected behaviour
Container name is logged.
Screenshots
I am getting this in the log:
07:52:32.715505210: Notice [07:52:32.715505210][4835ebe3f685] [<NA>]
Environment
- Falco version:
Falco version: 0.31.0 Driver version: 319368f1ad778691164d33d59945e00c5752cd27
- System info:
Wed Feb 23 15:58:27 2022: Falco version 0.31.0 (driver version 319368f1ad778691164d33d59945e00c5752cd27)
Wed Feb 23 15:58:27 2022: Falco initialized with configuration file /etc/falco/falco.yaml
Wed Feb 23 15:58:27 2022: Loading rules from file /etc/falco/falco_rules.yaml:
Wed Feb 23 15:58:28 2022: Loading rules from file /etc/falco/falco_rules.local.yaml:
Wed Feb 23 15:58:28 2022: Loading rules from file /etc/falco/k8s_audit_rules.yaml:
Wed Feb 23 15:58:28 2022: Loading rules from file /etc/falco/rules.d/local.yaml:
{
  "machine": "x86_64",
  "nodename": "[REDACTED]",
  "release": "5.16.7-200.fc35.x86_64",
  "sysname": "Linux",
  "version": "#1 SMP PREEMPT Sun Feb 6 19:53:54 UTC 2022"
}
- Cloud provider or hardware configuration:
- OS: Fedora 35; Falco is using the eBPF probe, not the kernel module
- Kernel:
- Installation method:
RPM
Additional context
It is running on K3S (rootful).
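For what it's worth, a quick host-side check (a sketch, assuming a default k3s layout) is to confirm which runtime socket actually exists, since k3s does not place containerd at the standard path:
# List the candidate runtime sockets; only the ones that exist will be shown.
ls -l /run/k3s/containerd/containerd.sock /run/containerd/containerd.sock /var/run/docker.sock 2>/dev/null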
Hi @patrickdung!
My two cents: you might need to enable CRI support and set the path to the CRI socket. k3s normally uses its embedded containerd at a non-default socket path, so that might be the problem. I can't see from your description which container runtime your cluster uses, so you might want to try this before further debugging.
If you are deploying Falco with helm you can add these options:
--set containerd.enabled=true \
--set containerd.socket=/run/k3s/containerd/containerd.sock \
Hope this helps.
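For reference, an end-to-end Helm install with those options might look like the sketch below (the chart repo URL and release name are the usual defaults; adjust as needed):
# Add the official chart repo and install Falco pointed at the k3s containerd socket.
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco \
  --set containerd.enabled=true \
  --set containerd.socket=/run/k3s/containerd/containerd.sock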
Yes, you are right.
I installed Falco on the host. Using --cri /run/k3s/containerd/containerd.sock solved the problem; the container name can now be shown:
16:10:51.023333426: Notice [16:10:51.023333426][9dc5e707f717] [goatcounter]
I saw that it is mentioned in the deployment docs. Would it be good to mention it for K3S (and other CRI implementations) in the installation doc?
Thanks for the help.
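For the RPM/host install, one way to make the fix persistent is a systemd drop-in that adds the --cri flag (a sketch; keep whatever options the packaged ExecStart line already carries):
# Open a drop-in editor for the packaged unit and override ExecStart, e.g.:
sudo systemctl edit falco
#   [Service]
#   ExecStart=
#   ExecStart=/usr/bin/falco --cri /run/k3s/containerd/containerd.sock
sudo systemctl restart falco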
Good to hear it solved the issue for you! I agree 100%; I believe this can help others. Would you like to open a PR with a brief note at the end of the docs section you linked?
I am not an expert, but I believe most k3s clusters use containerd, so it is likely that more people will find this useful.
Sure, will submit a PR about it soon.
I would like to test Falco with Helm (on k3s) using the options you provided first, then include that in the PR too.
While testing Falco on k3s with Helm, I ran into new problems:
Falco could get the namespace/pod name of Longhorn (it's in another namespace):
19:11:39.900830561: Notice Privileged container started (user=<NA> user_loginuid=0 command=container:1b919c1e6105 k8s.ns=longhorn-system k8s.pod=longhorn-manager-97pz9 container=1b919c1e6105 image=docker.io/longhornio/longhorn-manager:v1.2.3) k8s.ns=longhorn-system k8s.pod=longhorn-manager-97pz9 container=1b919c1e6105 k8s.ns=longhorn-system k8s.pod=longhorn-manager-97pz9 container=1b919c1e6105
But it could not get the container name for my custom rule, and the k8s.ns and k8s.pod fields are also <NA>:
19:12:01.836070898: Notice [19:12:01.836070898][9dc5e707f717] [<NA>] k8s.ns=<NA> k8s.pod=<NA> container=9dc5e707f717
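One way to see what the runtime itself reports for that container ID is to query the k3s-embedded containerd directly (a sketch; crictl may need to be invoked through the bundled k3s wrapper):
# Ask containerd which container owns the ID Falco printed.
sudo crictl --runtime-endpoint unix:///run/k3s/containerd/containerd.sock ps | grep 9dc5e707f717
# or, with the wrapper shipped by k3s:
sudo k3s crictl ps | grep 9dc5e707f717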
OK, I have created the PR, since the 'containerd.socket' location has to be updated for K3S. The problem in https://github.com/falcosecurity/falco/issues/1911#issuecomment-1050206366 is a separate issue.
Also, on second thought, the PR updates the 'Deployment' section instead of the 'Install' section.
Hi Patrick,
thanks for the update and the proposal to improve the docs.
it could not get the container name of my custom rule
I do not know why these data fields are not displayed just for custom rules. Can you share your custom rule here?
@pabloopez
It is basically the same rule as in the original post (used on the host), but in the YAML format used when installing with Helm. Here it is:
$ cat custom-rules.yaml
customRules:
  rules-local.yaml: |-
    - rule: Terminal shell in container
      desc: A shell was used as the entrypoint/exec point into a container with an attached terminal.
      condition: >
        spawned_process and container
        and shell_procs and proc.tty != 0
        and container_entrypoint
        and not user_expected_terminal_shell_in_container_conditions
      output: "[%evt.time][%container.id] [%container.name]"
      priority: NOTICE
      tags: [container, shell, mitre_execution]
Then install with Helm:
helm upgrade --install falco -f custom-rules.yaml falcosecurity/falco \
  --set containerd.enabled=true \
  --set containerd.socket=/run/k3s/containerd/containerd.sock \
  --set docker.enabled=false --debug
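To confirm the custom rules file actually reaches the pods, the Falco startup log can be checked for the usual "Loading rules from file" lines (a sketch; the DaemonSet name depends on the release name, here assumed to be falco):
# Grep the Falco pod logs for the rules files loaded at startup.
kubectl logs daemonset/falco | grep -i 'loading rules'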
your rule LGTM.
I do not know why the k8s and container metadata is not displayed specifically for your custom rule. Are you sure the event that triggers the rule is happening in a container running in a k8s pod? That could explain the <NA> value for the k8s metadata, but not the missing container.name.
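One way to confirm that the container ID Falco printed belongs to a pod known to the API server (a sketch):
# List namespace/pod plus container IDs and look for the ID from the Falco output.
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{" "}{.status.containerStatuses[*].containerID}{"\n"}{end}' | grep 9dc5e707f717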
Yes, it is the same pod (a container named goatcounter), and it is inside K3S.
In the test where Falco is installed on the host:
16:10:51.023333426: Notice [16:10:51.023333426][9dc5e707f717] [goatcounter]
The container id is 9dc5e707f717.
Then I stopped Falco on the host and installed Falco on the same K3S cluster with Helm. I exited the bash shell in the pod and triggered the rule again using the same command. The result is:
19:12:01.836070898: Notice [19:12:01.836070898][9dc5e707f717] [<NA>] k8s.ns=<NA> k8s.pod=<NA> container=9dc5e707f717
The container ID is the same. I also tested Falco on K3S (Helm) with both eBPF and the kernel module; the result is the same.
I am also having the same problem in AKS (I get the container ID but not the other k8s metadata). I wonder if you have tested passing the --disable-cri-async flag to falco.
@alfredomagallon Yes, I also tested with the --disable-cri-async flag and eBPF; the result is the same.
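For completeness, the combination under test on the host boils down to the sketch below (default config and rules files assumed):
# Run Falco in the foreground with synchronous CRI lookups against the k3s socket.
sudo falco --disable-cri-async --cri /run/k3s/containerd/containerd.sock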
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/falcosecurity/community.
/lifecycle stale
/remove-lifecycle stale
Hi, I would like to confirm whether the issue I'm encountering is the same as, or similar to, the one in this thread. If not, I will submit a new one.
kubectl logs falco-lg6n6 -n falco
08:34:14.441303951: Error File below /etc opened for writing (user=<NA> user_loginuid=-1 command=cp /etc/resolv.conf /etc/resolv2.conf pid=2204126 parent=bash pcmdline=bash file=/etc/resolv2.conf program=cp gparent=<NA> ggparent=<NA> gggparent=<NA> container_id=fe76dbf2595c image=<NA>) k8s.ns=default k8s.pod=ubuntu container=fe76dbf2595c
The container information (image) is not available (<NA>).
My environment is K3s. Installation:
helm install falco falcosecurity/falco \
--set driver.kind=ebpf \
--namespace falco \
--create-namespace \
--set containerd.enabled=true \
--set containerd.socket=/run/k3s/containerd/containerd.sock
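A quick sanity check (a sketch; assumes the release and namespace are both named falco) is to confirm the socket path actually made it into the rendered DaemonSet:
# Look for the CRI/containerd socket in the DaemonSet args and volume mounts.
kubectl -n falco get daemonset falco -o yaml | grep -iE 'cri|containerd'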
The same issue here with Rancher Kubernetes Engine 2 (RKE2).
Hi, have you tried the latest Falco version, https://github.com/falcosecurity/falco/releases/tag/0.35.1? Does the issue persist?
Any update?
Retested with the newest Falco and k3s. Now I get this; the problem is resolved. Thanks.
Aug 31 14:25:44 home falco[430526]: 14:25:44.540774197: Notice [14:25:44.540774197][721c2aa6333c] [recon]
Name:           recon-0
Namespace:      recon
.....
Status:         Running
IP:             10.42.0.188
IPs:
  IP:           10.42.0.188
Controlled By:  StatefulSet/recon
Containers:
  recon:
    Container ID:  containerd://721c2aa6333c65f8ff072678897b9276cc6dd49ab03acf4ad4cb5a44ee4984b5
Thank you for the feedback. I will close this; if there are other issues, feel free to re-open this or open another ticket.