oci-seccomp-bpf-hook
hook should detect when running in a rootless context and give a meaningful error
# podman run --rm --log-level=info --hooks-dir /usr/share/containers/oci/hooks.d --security-opt label=disable --annotation io.containers.trace-syscall='of:/tmp/foo.json' -it bash sh -c 'ls -al'
INFO[0000] podman filtering at log level info
INFO[0000] Found CNI network podman (type=bridge) at /home/bernhard/.config/cni/net.d/87-podman.conflist
INFO[0000] Setting parallel job count to 25
INFO[0000] Running conmon under slice user.slice and unitName libpod-conmon-ff907b2f6f7c482f5a49a6823c9fbf468b7f3041f3e48db2aad9ca0bfeacb16c.scope
Error: OCI runtime error: error executing hook `/usr/libexec/oci/hooks.d/oci-seccomp-bpf-hook` (exit code: 1)
Note that the last line is delayed by roughly 5–10 seconds.
# /usr/share/containers/oci/hooks.d/oci-seccomp-bpf-hook.json
{
  "version": "1.0.0",
  "hook": {
    "path": "/usr/libexec/oci/hooks.d/oci-seccomp-bpf-hook",
    "args": [
      "oci-seccomp-bpf-hook",
      "-s"
    ]
  },
  "when": {
    "annotations": {
      "^io\\.containers\\.trace-syscall$": ".*"
    }
  },
  "stages": [
    "prestart"
  ]
}
Thanks for reaching out!
Can you share the output of podman info and which version of the hook you are using? Usually such hiccups are kernel bugs but I can't be sure. Does journalctl reveal some hints on why the hook has failed?
# dnf provides (snipped)
oci-seccomp-bpf-hook-1.2.6-1.fc35.x86_64 : OCI Hook to generate seccomp json files based on EBF syscalls used by container
Repo : @System
Matched from:
Filename : /usr/libexec/oci/hooks.d/oci-seccomp-bpf-hook
# /usr/libexec/oci/hooks.d/oci-seccomp-bpf-hook --version
1.2.6
# podman info
host:
arch: amd64
buildahVersion: 1.23.1
cgroupControllers:
- cpu
- io
- memory
- pids
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: conmon-2.1.0-2.fc35.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.1.0, commit: '
cpus: 8
distribution:
distribution: fedora
variant: workstation
version: "35"
eventLogger: journald
hostname: jurassicpark
idMappings:
gidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
kernel: 5.19.6-100.fc35.x86_64
linkmode: dynamic
logDriver: journald
memFree: 18707378176
memTotal: 33499586560
ociRuntime:
name: crun
package: crun-1.5-1.fc35.x86_64
path: /usr/bin/crun
version: |-
crun version 1.5
commit: 54ebb8ca8bf7e6ddae2eb919f5b82d1d96863dea
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
os: linux
remoteSocket:
path: /run/user/1000/podman/podman.sock
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: true
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: true
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.1.12-2.fc35.x86_64
version: |-
slirp4netns version 1.1.12
commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
libslirp: 4.6.1
SLIRP_CONFIG_VERSION_MAX: 3
libseccomp: 2.5.3
swapFree: 8589930496
swapTotal: 8589930496
uptime: 29h 49m 1.28s (Approximately 1.21 days)
plugins:
log:
- k8s-file
- none
- journald
network:
- bridge
- macvlan
volume:
- local
registries:
search:
- registry.fedoraproject.org
- registry.access.redhat.com
- docker.io
- quay.io
store:
configFile: /home/bernhard/.config/containers/storage.conf
containerStore:
number: 18
paused: 0
running: 0
stopped: 18
graphDriverName: btrfs
graphOptions: {}
graphRoot: /home/bernhard/.local/share/containers/storage
graphStatus:
Build Version: 'Btrfs v5.18 '
Library Version: "102"
imageStore:
number: 5
runRoot: /run/user/1000/containers
volumePath: /home/bernhard/.local/share/containers/storage/volumes
version:
APIVersion: 3.4.7
Built: 1657492525
BuiltTime: Mon Jul 11 00:35:25 2022
GitCommit: ""
GoVersion: go1.16.15
OsArch: linux/amd64
Version: 3.4.7
Linux jurassicpark 5.19.6-100.fc35.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Aug 31 18:58:02 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
~journalctl doesn't show anything relevant, no unknown hiccups.~
Sep 07 16:47:35 jurassicpark oci-seccomp-bpf-hook[342720]: time="2022-09-07T16:47:35+02:00" level=fatal msg="BPF program didn't compile and attach within 10 seconds: please refer to the syslog (e.g., journalctl(1)) for more details"
Sep 07 16:48:52 jurassicpark oci-seccomp-bpf-hook[342901]: time="2022-09-07T16:48:52+02:00" level=info msg="Started OCI seccomp hook version 1.2.6"
Sep 07 16:48:52 jurassicpark oci-seccomp-bpf-hook[342901]: time="2022-09-07T16:48:52+02:00" level=info msg="Trying to load `kheaders` module"
Sep 07 16:48:52 jurassicpark oci-seccomp-bpf-hook[342909]: time="2022-09-07T16:48:52+02:00" level=info msg="Running floating process PID to attach: 342894"
Sep 07 16:48:53 jurassicpark oci-seccomp-bpf-hook[342909]: time="2022-09-07T16:48:53+02:00" level=info msg="Loading enter tracepoint"
Sep 07 16:48:53 jurassicpark audit[342909]: ANOM_ABEND auid=1000 uid=1000 gid=1000 ses=15 subj=unconfined_u:system_r:container_runtime_t:s0 pid=342909 comm="oci-seccomp-bpf" exe="/usr/libexec/oci/hooks.d/oci-seccomp-bpf-hook" sig=6 res=1
Sep 07 16:48:53 jurassicpark audit: BPF prog-id=3337 op=LOAD
Sep 07 16:48:53 jurassicpark audit: BPF prog-id=3338 op=LOAD
Sep 07 16:48:53 jurassicpark audit: BPF prog-id=3339 op=LOAD
Sep 07 16:48:53 jurassicpark systemd[1]: Started Process Core Dump (PID 342915/UID 0).
Sep 07 16:48:53 jurassicpark audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-coredump@33-342915-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 07 16:48:54 jurassicpark systemd-coredump[342916]: Process 342909 (oci-seccomp-bpf) of user 1000 dumped core.
Found module linux-vdso.so.1 with build-id: 54e66d3ec758fb2d64398a9689febee1c0176b48
Found module libpcre2-8.so.0 with build-id: a4b6bda666ec4af5ca0ef12312d90e046979d91c
Found module libcrypt.so.2 with build-id: 52fd1fd3dc3c8c13f94214c6f2d6fda0832db3d5
Found module libselinux.so.1 with build-id: f805394f993c704b949315b56c344d22dfad801f
Found module libbrotlicommon.so.1 with build-id: 9e92a8ec2d9efe80ec86e04fba0549cd3202ebc4
Found module libsasl2.so.3 with build-id: a962c31929ae88a69c3877a7666555f5610eabf4
Found module libresolv.so.2 with build-id: 5907bee1c2667981df1ec82dc897c564ab89e559
Found module libkeyutils.so.1 with build-id: 2560a16099ad1875f7ea2195ae25b97ea168a758
Found module libkrb5support.so.0 with build-id: f7452852d4ffd2af61a4828b8a7ef7588507a529
Found module libunistring.so.2 with build-id: edcae9db236efd11e61995054ddf27a89ec6dc40
Found module libbrotlidec.so.1 with build-id: 57cb8a53e33f31a620f3739919414c0d00549f87
Found module liblber-2.4.so.2 with build-id: e7a8b3a354a19afa3f7261c85c4bb30af11f8126
Found module libldap_r-2.4.so.2 with build-id: 1406008a1002c2265ac3a7e540a5ed4ad278579c
Found module libcom_err.so.2 with build-id: a1d791cd7600f5609702a895a64d9131d1cd7b8f
Found module libk5crypto.so.3 with build-id: 84fe80b22acdfc7ec8c5be7b89af4da9702f59eb
Found module libkrb5.so.3 with build-id: f002f427c787f717d0770d28f0355759b13021e5
Found module libgssapi_krb5.so.2 with build-id: f85569d666762f80fdbe92031a75330fd18b0809
Found module libcrypto.so.1.1 with build-id: 826018d0fadd8204b482f6cf2e192720a594bcff
Found module libssl.so.1.1 with build-id: 908866f88b06bfa3b935e976f00b30362f7df5fc
Found module libpsl.so.5 with build-id: 71614cc984977692f16edc6189df04f570f51608
Found module libssh.so.4 with build-id: 1e466b1f1c44646e8ef4279b82980df32fa03261
Found module libidn2.so.0 with build-id: ad8fb49318637df6927fc666d7ca0306cc1476aa
Found module libnghttp2.so.14 with build-id: b6492c1dabf77777b1ae631416297ffc01f7ff30
Found module libcurl.so.4 with build-id: 1944899b62dc510aa7bd9aba0d70ba52118f96da
Found module libtinfo.so.6 with build-id: 06c834f68d17417916bfc0350d75b727f8297f22
Found module libz.so.1 with build-id: 96f8194eeb0585da4b417dff6583a419b7cb756f
Found module libedit.so.0 with build-id: 9549e0628b667d19e647c9021983e041f921d571
Found module libffi.so.6 with build-id: 0129f72b58e11d59546f7f207d2c90af7ebd9a5e
Found module libm.so.6 with build-id: 8a11fe1b0b919d2fd28cb25b8bdfb5c05ff6aba7
Found module ld-linux-x86-64.so.2 with build-id: cd4630178d881d9ffc6cf7452eadee7ded8c6a64
Found module libgcc_s.so.1 with build-id: b6870691657424bce223a2e63e40a74a86865cd4
Found module libstdc++.so.6 with build-id: 36f7946c2608f3e08b11ec0af0b63055a39c89f3
Found module libbpf.so.0 with build-id: c5a452bc5e24ef259fa78b4a006cc2c04dedda2d
Found module libdebuginfod.so.1 with build-id: 2b6e46f97b35d9acc9411c2b5cd17e6b1c1c022b
Found module libelf.so.1 with build-id: c00270dc851a590f453de5ef73a4fd3fa0d37629
Found module libLLVM-12.so with build-id: 92f8b8aafcefe8f25768c37e16635b8d23e363a5
Found module libclang-cpp.so.12 with build-id: f446e1c58d13db63cde896ccdda9220267d6a447
Found module libc.so.6 with build-id: 0db628361c1a2930c4914cc759c1aaffb58457d0
Found module libseccomp.so.2 with build-id: d9e37e30e12bcd8de6d550fda9c874c8870067d9
Found module libbcc.so.0 with build-id: aa75bbf4bc50a760cb4dfd337a46a2603ddf5b24
Found module oci-seccomp-bpf-hook with build-id: a177d6712413c71b4aba687d64270407d2886ff5
Stack trace of thread 342909:
#0 0x000055f41829c661 runtime.raise (oci-seccomp-bpf-hook + 0x10f661)
#1 0x000055f41827c691 runtime.sigfwdgo (oci-seccomp-bpf-hook + 0xef691)
#2 0x000055f41827ae94 runtime.sigtrampgo (oci-seccomp-bpf-hook + 0xede94)
#3 0x000055f41829c9c3 runtime.sigtramp (oci-seccomp-bpf-hook + 0x10f9c3)
#4 0x00007f3483fd0dc0 __restore_rt (libc.so.6 + 0x54dc0)
#5 0x000055f41829c661 runtime.raise (oci-seccomp-bpf-hook + 0x10f661)
#6 0x000055f4182654ee runtime.fatalpanic (oci-seccomp-bpf-hook + 0xd84ee)
#7 0x000055f418264e25 runtime.gopanic (oci-seccomp-bpf-hook + 0xd7e25)
#8 0x000055f418262e1d runtime.panicmem (oci-seccomp-bpf-hook + 0xd5e1d)
#9 0x000055f41827be65 runtime.sigpanic (oci-seccomp-bpf-hook + 0xeee65)
#10 0x000055f418378c04 github.com/containers/oci-seccomp-bpf-hook/vendor/github.com/iovisor/gobpf/bcc.(*Module).Close.func1 (oci-seccomp-bpf-hook + 0x1ebc04)
#11 0x000055f4183766c5 github.com/containers/oci-seccomp-bpf-hook/vendor/github.com/iovisor/gobpf/bcc.(*Module).Close (oci-seccomp-bpf-hook + 0x1e96c5)
#12 0x000055f4182993e0 runtime.call16 (oci-seccomp-bpf-hook + 0x10c3e0)
#13 0x000055f418264d59 runtime.gopanic (oci-seccomp-bpf-hook + 0xd7d59)
#14 0x000055f418262e1d runtime.panicmem (oci-seccomp-bpf-hook + 0xd5e1d)
#15 0x000055f41827be65 runtime.sigpanic (oci-seccomp-bpf-hook + 0xeee65)
#16 0x000055f418376bc8 github.com/containers/oci-seccomp-bpf-hook/vendor/github.com/iovisor/gobpf/bcc.(*Module).Load (oci-seccomp-bpf-hook + 0x1e9bc8)
#17 0x000055f418383251 main.runBPFSource (oci-seccomp-bpf-hook + 0x1f6251)
#18 0x000055f418381dfc main.main (oci-seccomp-bpf-hook + 0x1f4dfc)
#19 0x000055f418267aa3 runtime.main (oci-seccomp-bpf-hook + 0xdaaa3)
#20 0x000055f41829ae61 runtime.goexit (oci-seccomp-bpf-hook + 0x10de61)
Sep 07 16:48:54 jurassicpark systemd[1]: [email protected]: Deactivated successfully.
Sep 07 16:48:54 jurassicpark audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-coredump@33-342915-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 07 16:48:54 jurassicpark audit: BPF prog-id=0 op=UNLOAD
Sep 07 16:48:54 jurassicpark audit: BPF prog-id=0 op=UNLOAD
Sep 07 16:48:54 jurassicpark audit: BPF prog-id=0 op=UNLOAD
Sep 07 16:48:54 jurassicpark abrt-server[342922]: Deleting problem directory ccpp-2022-09-07-16:48:54.468649-342909 (dup of ccpp-2022-09-06-11:53:05.218638-4796)
Sep 07 16:48:55 jurassicpark systemd[1]: Started dbus-:[email protected].
Sep 07 16:48:55 jurassicpark audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-:1.16-org.freedesktop.problems@18 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 07 16:48:55 jurassicpark abrt-notification[342975]: Process 4796 (oci-seccomp-bpf-hook) crashed in runtime.raise()
Sep 07 16:49:02 jurassicpark oci-seccomp-bpf-hook[342901]: time="2022-09-07T16:49:02+02:00" level=fatal msg="BPF program didn't compile and attach within 10 seconds: please refer to the syslog (e.g., journalctl(1)) for more details"
Sep 07 16:49:02 jurassicpark podman[342838]: 2022-09-07 16:49:02.542602536 +0200 CEST m=+10.404302494 container remove 8835ee1f4356a7e16d8e6e4fa280513747f47c198355540903573a11f12042e3 (image=docker.io/library/bash:latest, name=stupefied_mendel)
Sep 07 16:49:02 jurassicpark systemd[16838]: libpod-conmon-8835ee1f4356a7e16d8e6e4fa280513747f47c198355540903573a11f12042e3.scope: Consumed 1.526s CPU time.
@vrothberg would be great if you could take a second peek? I very much appreciate it.
The ANOM_ABEND is not an SELinux denial (see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security_guide/sec-audit_record_types#ftn.footnote-ANOM), so the messages in /var/log/audit/audit.log are not useful for creating a .pp policy module.
type=ANOM_ABEND msg=audit(1662559096.567:20940): auid=1000 uid=1000 gid=1000 ses=13 subj=unconfined_u:system_r:container_runtime_t:s0 pid=265335 comm="oci-seccomp-bpf" exe="/usr/libexec/oci/hooks.d/oci-seccomp-bpf-hook" sig=6 res=1AUID="bernhard" UID="bernhard" GID="bernhard"
type=ANOM_ABEND msg=audit(1662560731.554:24401): auid=1000 uid=1000 gid=1000 ses=13 subj=unconfined_u:system_r:container_runtime_t:s0 pid=308447 comm="oci-seccomp-bpf" exe="/usr/libexec/oci/hooks.d/oci-seccomp-bpf-hook" sig=6 res=1AUID="bernhard" UID="bernhard" GID="bernhard"
type=ANOM_ABEND msg=audit(1662561541.776:26241): auid=1000 uid=1000 gid=1000 ses=15 subj=unconfined_u:system_r:container_runtime_t:s0 pid=328818 comm="oci-seccomp-bpf" exe="/usr/libexec/oci/hooks.d/oci-seccomp-bpf-hook" sig=6 res=1AUID="bernhard" UID="bernhard" GID="bernhard"
type=ANOM_ABEND msg=audit(1662561957.851:27138): auid=1000 uid=1000 gid=1000 ses=15 subj=unconfined_u:system_r:container_runtime_t:s0 pid=339011 comm="oci-seccomp-bpf" exe="/usr/libexec/oci/hooks.d/oci-seccomp-bpf-hook" sig=6 res=1AUID="bernhard" UID="bernhard" GID="bernhard"
type=ANOM_ABEND msg=audit(1662562046.904:27166): auid=1000 uid=1000 gid=1000 ses=15 subj=unconfined_u:system_r:container_runtime_t:s0 pid=342728 comm="oci-seccomp-bpf" exe="/usr/libexec/oci/hooks.d/oci-seccomp-bpf-hook" sig=6 res=1AUID="bernhard" UID="bernhard" GID="bernhard"
type=ANOM_ABEND msg=audit(1662562133.661:27176): auid=1000 uid=1000 gid=1000 ses=15 subj=unconfined_u:system_r:container_runtime_t:s0 pid=342909 comm="oci-seccomp-bpf" exe="/usr/libexec/oci/hooks.d/oci-seccomp-bpf-hook" sig=6 res=1AUID="bernhard" UID="bernhard" GID="bernhard"
Thanks, @drahnr!
Really hard to say what's going on. It could be a kernel bug or one in bcc-devel. It works on F36. I suggest keeping the issue open and checking in a week or two whether it's still present.
I tested the exact same CLI as root, and there it works just fine:
# podman run --rm --log-level=info --hooks-dir /usr/share/containers/oci/hooks.d --privileged --security-opt label=disable --annotation io.containers.trace-syscall='of:/tmp/foo7.json' -it bash sh -c 'ls -al'
INFO[0000] podman filtering at log level info
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman.conflist
INFO[0000] Setting parallel job count to 25
INFO[0000] Got pod network &{Name:happy_bardeen Namespace:happy_bardeen ID:afe7ab5e00e58eea914fd711282a4255d5e4e02bf7314b855b16a7b458d352cb NetNS:/run/netns/cni-46f4a80f-2bfe-9e55-748d-1af2bae0872e Networks:[{Name:podman Ifname:eth0}] RuntimeConfig:map[podman:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}
INFO[0000] Adding pod happy_bardeen_happy_bardeen to CNI network "podman" (type=bridge)
INFO[0000] Running conmon under slice machine.slice and unitName libpod-conmon-afe7ab5e00e58eea914fd711282a4255d5e4e02bf7314b855b16a7b458d352cb.scope
INFO[0002] Got Conmon PID as 345766
total 0
dr-xr-xr-x 1 root root 114 Feb 5 2022 .
dr-xr-xr-x 1 root root 114 Feb 5 2022 ..
drwxr-xr-x 1 root root 838 Jan 21 2022 bin
drwxr-xr-x 16 root root 4100 Sep 8 08:17 dev
drwxr-xr-x 1 root root 576 Sep 8 08:17 etc
drwxr-xr-x 1 root root 0 Nov 24 2021 home
drwxr-xr-x 1 root root 290 Jan 21 2022 lib
drwxr-xr-x 1 root root 28 Nov 24 2021 media
drwxr-xr-x 1 root root 0 Nov 24 2021 mnt
drwxr-xr-x 1 root root 0 Nov 24 2021 opt
dr-xr-xr-x 354 root root 0 Sep 8 08:17 proc
drwx------ 1 root root 0 Nov 24 2021 root
drwxr-xr-x 1 root root 40 Sep 8 08:17 run
drwxr-xr-x 1 root root 800 Nov 24 2021 sbin
drwxr-xr-x 1 root root 0 Nov 24 2021 srv
dr-xr-xr-x 13 root root 0 Sep 6 08:43 sys
drwxrwxrwt 1 root root 0 Jan 21 2022 tmp
drwxr-xr-x 1 root root 46 Jan 21 2022 usr
drwxr-xr-x 1 root root 86 Jan 21 2022 var
INFO[0002] Got pod network &{Name:happy_bardeen Namespace:happy_bardeen ID:afe7ab5e00e58eea914fd711282a4255d5e4e02bf7314b855b16a7b458d352cb NetNS:/run/netns/cni-46f4a80f-2bfe-9e55-748d-1af2bae0872e Networks:[{Name:podman Ifname:eth0}] RuntimeConfig:map[podman:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}
INFO[0002] Deleting pod happy_bardeen_happy_bardeen from CNI network "podman" (type=bridge)
# cat /tmp/foo7.json
{"defaultAction":"SCMP_ACT_ERRNO","architectures":["SCMP_ARCH_X86_64"],"syscalls":[{"names":["access","arch_prctl","brk","capset","chdir","close","close_range","dup2","dup3","execve","exit_group","faccessat2","fchdir","fchown","fcntl","fstat","getcwd","getdents64","getegid","geteuid","getgid","getpgid","getpid","getppid","getuid","ioctl","lseek","lstat","mmap","mount","mprotect","munmap","newfstatat","open","openat","pivot_root","prctl","prlimit64","pselect6","read","rt_sigaction","rt_sigprocmask","sendmsg","set_tid_address","sethostname","setresgid","setresuid","setsid","stat","statx","umask","umount2","uname","write","writev"],"action":"SCMP_ACT_ALLOW","args":[],"comment":"","includes":{},"excludes":{}}]}
So this can't be a kernel issue, from my understanding. Is there any capability or other requirement for running it in a rootless podman container?
Ah, that makes sense. The hook must be run as root at present. I haven't followed the rootless BPF progress in detail, but it's likely not enabled on most distributions.
I will rename the issue. The hook should be able to detect whether it's running in a rootless context or not.
I am facing similar problems with a fresh Fedora 37 Desktop ...
When I install the oci-seccomp-bpf-hook package (sudo dnf install ...) and try:
#> sudo podman run --rm --annotation io.containers.trace-syscall=of:/tmp/ls.json fedora:30 ls
I see the following error:
Error: OCI runtime error: crun: error executing hook /usr/libexec/oci/hooks.d/oci-seccomp-bpf-hook (exit code: 1)
When I compile the package like
#> make binary
#> sudo make install
#> sudo podman run --rm --annotation io.containers.trace-syscall=of:/tmp/ls.json fedora:30 ls
no error appears, but the output file is missing.
When I compile the package like
#> make binary
#> sudo make PREFIX=/usr install
#> sudo podman run --rm --annotation io.containers.trace-syscall=of:/tmp/ls.json fedora:30 ls
I receive the error shown before.
Do I have to go back to Fedora 35?
@itarch, the problem looks very different to me as it doesn't seem to be related to running as a non-root user.
Feel free to open a new issue. Note that the BPF stack is quite fragile at times. The hook can sometimes fail after a kernel update, etc. Usually such issues are fixed within a short period of time.
@vrothberg, thank you for the feedback; oci-seccomp-bpf-hook works fine with F35 Server. However, I will open a new issue since I'm still facing the problems described above (even after updates) with F37 Desktop. Maybe I should try F37 Server first ...
@vrothberg I guess the fedora gating test failures would also be related: https://artifacts.dev.testing-farm.io/b6ca509c-f060-4245-972c-216cc6fafc0a/
Thanks for sharing, @lsm5. The failures have been happening for quite a while now. Let's keep an eye on it, and if it still doesn't work in a week or two, I'll take a look.
I opened https://github.com/containers/oci-seccomp-bpf-hook/pull/121 to log a more useful error than before.