
BPF Program Not Profiled Rule with systemd

Open rriverak opened this issue 8 months ago • 9 comments

Hey, we are currently seeing false-positive events from the BPF Program Not Profiled rule. The event is triggered by systemd; here is an example log from Falco:

{"hostname":"node-4711","output":"11:28:29.271623114: Notice BPF Program Not Profiled (bpf_cmd=5 evt_type=bpf user=root user_uid=0 user_loginuid=-1 process=systemd proc_exepath=/usr/lib/systemd/systemd parent=<NA> command=systemd terminal=0 exe_flags=<NA> container_id=host container_image=<NA> container_image_tag=<NA> container_name=host k8s_ns=<NA> k8s_pod_name=<NA>)","output_fields":{"container.id":"host","container.image.repository":null,"container.image.tag":null,"container.name":"host","evt.arg.cmd":"5","evt.arg.flags":null,"evt.time":1746617309271623114,"evt.type":"bpf","k8s.ns.name":null,"k8s.pod.name":null,"proc.cmdline":"systemd","proc.exepath":"/usr/lib/systemd/systemd","proc.name":"systemd","proc.pname":null,"proc.tty":0,"user.loginuid":-1,"user.name":"root","user.uid":0},"priority":"Notice","rule":"BPF Program Not Profiled","source":"syscall","tags":["TA0003","container","host","maturity_sandbox","mitre_persistence"],"time":"2025-05-07T11:28:29.271623114Z"}

As you can see, both the process and proc.name fields are set to systemd in our event.

But if we look at the rule BPF Program Not Profiled, specifically the bpf_profiled_binaries list, we can see that systemd is already among the trusted binaries.

We are now wondering why we still get this event for systemd and would be happy to get some advice here.

Falco: 0.40.0 Helm: 4.21.3 Kubernetes: 1.30.1 Ubuntu: 24.04

rriverak avatar May 07 '25 12:05 rriverak

I'm having the same issue. I'm not sure why this works, but as a workaround I currently override the list with the same items.

- list: bpf_profiled_binaries
  override:
    items: replace
  items:
    - falco
    - bpftool
    - systemd
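For anyone deploying via the official Helm chart, a sketch of how this override could be shipped as a custom rules file, assuming the chart's `customRules` value (the filename key `override-bpf-profiled.yaml` is an arbitrary example):

```yaml
# values.yaml (sketch, assumes the Falco chart's customRules value)
customRules:
  override-bpf-profiled.yaml: |-
    - list: bpf_profiled_binaries
      override:
        items: replace
      items:
        - falco
        - bpftool
        - systemd
```

The chart renders each `customRules` entry into a rules file that Falco loads after the bundled rulesets, so the `override` takes effect on top of them.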

katsew avatar May 08 '25 03:05 katsew

Hey @katsew, thanks for this very helpful hint!

I rechecked the logs of our Falco pods again and realized that we are actually using old rules!

{"level":"INFO","msg":"Resolving dependencies ...","timestamp":"2025-05-07 16:18:47"}
{"level":"INFO","msg":"Installing artifacts","refs":["ghcr.io/falcosecurity/rules/falco-rules:2","ghcr.io/falcosecurity/rules/falco-incubating-rules:2","ghcr.io/falcosecurity/rules/falco-sandbox-rules:2"],"timestamp":"2025-05-07 16:18:49"}
{"level":"INFO","msg":"Preparing to pull artifact","ref":"ghcr.io/falcosecurity/rules/falco-rules:2","timestamp":"2025-05-07 16:18:49"}
{"level":"INFO","msg":"Pulling layer 5edca1a8eea6","timestamp":"2025-05-07 16:18:50"}
{"level":"INFO","msg":"Pulling layer 48b6c5ae7a61","timestamp":"2025-05-07 16:18:50"}
{"level":"INFO","msg":"Pulling layer 8ac74658d3a4","timestamp":"2025-05-07 16:18:50"}
{"digest":"ghcr.io/falcosecurity/rules/falco-rules@sha256:8ac74658d3a4b3d4db6228db23b5706c1cf5e25f33c8eb33881e28f660a43828","level":"INFO","msg":"Verifying signature for artifact","timestamp":"2025-05-07 16:18:50"}
{"level":"INFO","msg":"Signature successfully verified!","timestamp":"2025-05-07 16:18:51"}
{"file":"falco_rules.yaml.tar.gz","level":"INFO","msg":"Extracting and installing artifact","timestamp":"2025-05-07 16:18:51","type":"rulesfile"}

As you can see in the logs, we pull the rules image with tag 2, which is outdated. At tag 2, the rule was still in sandbox and did not have systemd in the list. https://github.com/falcosecurity/rules/blob/97308654f2e43baca516f61d5b43c5cfc7eb6e10/rules/falco-sandbox_rules.yaml#L1695-L1696

systemd was added to the list starting with ruleset tag 3 (version 3.2.0). https://github.com/falcosecurity/rules/blob/b6ad37371923b28d4db399cf11bd4817f923c286/rules/falco-incubating_rules.yaml#L1288-L1289

If I now look at the official Helm chart, version 3 of the rules is the default. https://github.com/falcosecurity/charts/blob/6db1b396ae76741588208a437f8f7d44a2bee91e/charts/falco/values.yaml#L552-L553

So we will now update our rules to "3" and expect this to solve the problem :)

rriverak avatar May 08 '25 07:05 rriverak

Hi @rriverak,

Thanks for the feedback! In my case, I tried to download the Falco rules with the falcoctl.yaml below, expecting the latest v3 rules (sha256:de2cd036fd7f9bb87de5d62b36d0f35ff4fa8afbeb9a41aa9624e5f6f9a004e1), but it seems that these refs download old ones.

    artifact:
      allowedTypes:
      - rulesfile
      - plugin
      follow:
        every: 6h
        falcoversions: http://localhost:8765/versions
        pluginsDir: /plugins
        refs:
        - falco-rules:3
        - falco-incubating-rules:3
        - falco-sandbox-rules:3
        rulesfilesDir: /rulesfiles
      install:
        pluginsDir: /plugins
        refs:
        - falco-rules:3
        - falco-incubating-rules:3
        - falco-sandbox-rules:3
        resolveDeps: true
        rulesfilesDir: /rulesfiles
    indexes:
    - name: falcosecurity
      url: https://falcosecurity.github.io/falcoctl/index.yaml

Alternatively, I tried dropping the version number from the refs so that the latest rule files are downloaded, and that pulled the correct rule files you've mentioned.

    artifact:
      allowedTypes:
      - rulesfile
      - plugin
      follow:
        every: 6h
        falcoversions: http://localhost:8765/versions
        pluginsDir: /plugins
        refs:
        - falco-rules
        - falco-incubating-rules
        - falco-sandbox-rules
        rulesfilesDir: /rulesfiles
      install:
        pluginsDir: /plugins
        refs:
        - falco-rules
        - falco-incubating-rules
        - falco-sandbox-rules
        resolveDeps: true
        rulesfilesDir: /rulesfiles
    indexes:
    - name: falcosecurity
      url: https://falcosecurity.github.io/falcoctl/index.yaml

So the root cause of this issue is that rule file versioning does not work as expected? 🤔

katsew avatar May 08 '25 11:05 katsew

So the root cause of this issue is that rule file versioning does not work as expected? 🤔

Yes, I also believe that we get different rule "versions" here than expected. I would expect semantic versioning for the images, but the rule image tags do not correspond to SemVer.

(screenshot: list of published rule image tags)

For some reason we have several tags for version 3 in different spellings. Looks like the image tagging got messed up here... 😄

rriverak avatar May 08 '25 11:05 rriverak

I found the rule without systemd in

oras pull ghcr.io/falcosecurity/rules/falco-sandbox-rules:3 --output ./falco-sandbox-rules-3

and found the fixed rule in

oras pull ghcr.io/falcosecurity/rules/falco-incubating-rules:4 --output ./falco-incubating-rules-4

But the OCI image for the main rules, ghcr.io/falcosecurity/rules/falco-rules:4, is missing. So you probably have to reference the rules without a tag to get everything (mixed tags) up to date.

rriverak avatar May 08 '25 20:05 rriverak

As it turns out, the download seems to be working as expected, although it is a bit confusing since falco-rules, falco-incubating-rules, and falco-sandbox-rules are updated independently of each other. If we need the same rules that are on the main branch, it looks like we need to download using the latest tag, or check the latest released tag for each ruleset. In other words, this is not a bug, and it looks like this issue can be closed.
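For a deterministic setup instead of floating major tags, one option is to pin exact version tags in the falcoctl refs. A sketch; the falco-rules and falco-incubating-rules versions below are the ones mentioned later in this thread, while the sandbox tag is a hypothetical placeholder (check the registry for the tags that actually exist per artifact):

```yaml
# falcoctl.yaml fragment (sketch, assumes fully qualified version tags
# are published for each artifact)
artifact:
  install:
    refs:
    - falco-rules:4.0.0              # version from this thread
    - falco-incubating-rules:5.0.0   # version from this thread
    - falco-sandbox-rules:5.0.0      # hypothetical, verify against the registry
    resolveDeps: true
    rulesfilesDir: /rulesfiles
```

The trade-off is that pinned refs will not pick up new rule releases until you bump them, while untagged refs always follow the latest published artifact.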

katsew avatar May 09 '25 05:05 katsew

I'm facing the same issue when using the latest release, but with /usr/bin/runc and containerd-shim instead of systemd. Falco: 0.41.0 Ubuntu: 22.04 Kernel: 5.15.0-1088 AKS: 1.32.4

falco-rules: 4.0.0 falco-incubating-rules: 5.0.0

{
      "uuid": "7605c0ca-6ecf-4292-aba2-e0fd91eaac31",
      "output": "2025-06-11T10:21:59.737823567+0000: Notice BPF Program Not Profiled | bpf_cmd=5 evt_type=bpf user=root user_uid=0 user_loginuid=-1 process=runc proc_exepath=/usr/bin/runc parent=containerd-shim command=runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/06aab6210e5d9572ed734b52765ce6dbbec4bfb93a2f4f4149fc79570a03a014/log.json --log-format json --systemd-cgroup create --bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/06aab6210e5d9572ed734b52765ce6dbbec4bfb93a2f4f4149fc79570a03a014 --pid-file /run/containerd/io.containerd.runtime.v2.task/k8s.io/06aab6210e5d9572ed734b52765ce6dbbec4bfb93a2f4f4149fc79570a03a014/init.pid 06aab6210e5d9572ed734b52765ce6dbbec4bfb93a2f4f4149fc79570a03a014 terminal=0 container_id=host container_name=host container_image_repository= container_image_tag= k8s_pod_name=<NA> k8s_ns_name=<NA>",
      "priority": "Notice",
      "rule": "BPF Program Not Profiled",
      "time": "2025-06-11T10:21:59.737823567Z",
      "source": "syscall",
      "output_fields": {
        "container.id": "host",
        "container.image.repository": "",
        "container.image.tag": "",
        "container.name": "host",
        "evt.arg.cmd": "5",
        "evt.time.iso8601": 1749637319737823500,
        "evt.type": "bpf",
        "k8s.ns.name": null,
        "k8s.pod.name": null,
        "proc.cmdline": "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/06aab6210e5d9572ed734b52765ce6dbbec4bfb93a2f4f4149fc79570a03a014/log.json --log-format json --systemd-cgroup create --bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/06aab6210e5d9572ed734b52765ce6dbbec4bfb93a2f4f4149fc79570a03a014 --pid-file /run/containerd/io.containerd.runtime.v2.task/k8s.io/06aab6210e5d9572ed734b52765ce6dbbec4bfb93a2f4f4149fc79570a03a014/init.pid 06aab6210e5d9572ed734b52765ce6dbbec4bfb93a2f4f4149fc79570a03a014",
        "proc.exepath": "/usr/bin/runc",
        "proc.name": "runc",
        "proc.pname": "containerd-shim",
        "proc.tty": 0,
        "user.loginuid": -1,
        "user.name": "root",
        "user.uid": 0
      },
      "hostname": "aks-default-28733949-vmss0000oi",
      "tags": [
        "",
        "TA0003",
        "container",
        "host",
        "maturity_incubating",
        "mitre_persistence"
      ]
    }

Does it make sense to add these to the list of allowed binaries in bpf_profiled_binaries?
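In the meantime, this can be handled locally with an append-style list override in a custom rules file. A sketch, assuming the `override`/`append` syntax supported by recent Falco versions; whether runc and containerd-shim should be trusted upstream is a separate question:

```yaml
# custom rules file loaded after the bundled rulesets (sketch)
- list: bpf_profiled_binaries
  override:
    items: append   # keep the upstream entries, add ours on top
  items:
    - runc
    - containerd-shim
```

Using `append` rather than `replace` means the local file does not have to be kept in sync with upstream additions to the list.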

tberreis avatar Jun 11 '25 11:06 tberreis

Any updates regarding this issue?

schoen2 avatar Jul 30 '25 12:07 schoen2

Issues go stale after 90d of inactivity.

Mark the issue as fresh with /remove-lifecycle stale.

Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Provide feedback via https://github.com/falcosecurity/community.

/lifecycle stale

poiana avatar Oct 28 '25 16:10 poiana

Stale issues rot after 30d of inactivity.

Mark the issue as fresh with /remove-lifecycle rotten.

Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Provide feedback via https://github.com/falcosecurity/community.

/lifecycle rotten

poiana avatar Nov 27 '25 16:11 poiana