dnsname
'exit status 3' when starting container
Hi, I'm trying to get dnsname working with podman on openSUSE MicroOS. The versions of the relevant software are:
chost1:~ # podman --version
podman version 2.0.6
chost1:~ # dnsmasq --version
Dnsmasq version 2.82 Copyright (c) 2000-2020 Simon Kelley
When I try to start a container after enabling podman, I get:
chost1:~ # podman run -dt --name web --network cni-podman1 --log-level debug quay.io/libpod/alpine_nginx:latest
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman run -dt --name web --network cni-podman1 --log-level debug quay.io/libpod/alpine_nginx:latest)
DEBU[0000] Ignoring libpod.conf EventsLogger setting "/etc/containers/containers.conf". Use "journald" if you want to change this setting and remove libpod.conf files.
DEBU[0000] Reading configuration file "/usr/share/containers/containers.conf"
DEBU[0000] Merged system config "/usr/share/containers/containers.conf": &{{[] [] containers-default-0.14.10 [] host enabled [CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] [] [nproc=32768:32768] [] [] [] false [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] false false false /usr/bin/catatonit private k8s-file -1 bridge false 2048 private /usr/share/containers/seccomp.json 65536k private host 65536} {false systemd [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] ctrl-p,ctrl-q true /var/run/libpod/events/events.log file [/usr/share/containers/oci/hooks.d] docker:// /pause k8s.gcr.io/pause:3.2 /usr/libexec/podman/catatonit shm false 2048 runc map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] missing false map[] [] [crun runc] [crun] [kata kata-runtime kata-qemu kata-fc] {false false false false false false} /etc/containers/policy.json false 3 /var/lib/containers/storage/libpod 10 /var/run/libpod /var/lib/containers/storage/volumes} {[/usr/libexec/cni] podman /etc/cni/net.d/}}
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver btrfs
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /var/run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /var/run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "btrfs"
DEBU[0000] Initializing event backend file
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] using runtime "/usr/bin/runc"
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist
INFO[0000] Found CNI network cni-podman1 (type=bridge) at /etc/cni/net.d/cni-podman1.conflist
WARN[0000] Default CNI network name podman is unchangeable
INFO[0000] Setting parallel job count to 7
DEBU[0000] parsed reference into "[btrfs@/var/lib/containers/storage+/var/run/containers/storage]quay.io/libpod/alpine_nginx:latest"
DEBU[0000] parsed reference into "[btrfs@/var/lib/containers/storage+/var/run/containers/storage]quay.io/libpod/alpine_nginx:latest"
DEBU[0000] parsed reference into "[btrfs@/var/lib/containers/storage+/var/run/containers/storage]@3ef70f7291f47dfe2b82931a993e16f5a44a0e7a68034c3e0e086d77f5829adc"
DEBU[0000] parsed reference into "[btrfs@/var/lib/containers/storage+/var/run/containers/storage]quay.io/libpod/alpine_nginx:latest"
DEBU[0000] parsed reference into "[btrfs@/var/lib/containers/storage+/var/run/containers/storage]@3ef70f7291f47dfe2b82931a993e16f5a44a0e7a68034c3e0e086d77f5829adc"
DEBU[0000] using systemd mode: false
DEBU[0000] setting container name web
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/etc/containers/seccomp.json"
DEBU[0000] Allocated lock 6 for container c6cd681e6373ebb7294519bfa76cc9a3343d939dee94ad5c39955b8e6077f4de
DEBU[0000] parsed reference into "[btrfs@/var/lib/containers/storage+/var/run/containers/storage]@3ef70f7291f47dfe2b82931a993e16f5a44a0e7a68034c3e0e086d77f5829adc"
DEBU[0000] created container "c6cd681e6373ebb7294519bfa76cc9a3343d939dee94ad5c39955b8e6077f4de"
DEBU[0000] container "c6cd681e6373ebb7294519bfa76cc9a3343d939dee94ad5c39955b8e6077f4de" has work directory "/var/lib/containers/storage/btrfs-containers/c6cd681e6373ebb7294519bfa76cc9a3343d939dee94ad5c39955b8e6077f4de/userdata"
DEBU[0000] container "c6cd681e6373ebb7294519bfa76cc9a3343d939dee94ad5c39955b8e6077f4de" has run directory "/var/run/containers/storage/btrfs-containers/c6cd681e6373ebb7294519bfa76cc9a3343d939dee94ad5c39955b8e6077f4de/userdata"
DEBU[0000] container "c6cd681e6373ebb7294519bfa76cc9a3343d939dee94ad5c39955b8e6077f4de" has CgroupParent "machine.slice/libpod-c6cd681e6373ebb7294519bfa76cc9a3343d939dee94ad5c39955b8e6077f4de.scope"
DEBU[0000] mounted container "c6cd681e6373ebb7294519bfa76cc9a3343d939dee94ad5c39955b8e6077f4de" at "/var/lib/containers/storage/btrfs/subvolumes/e12c5ce001445caca2392b8a72c81ed3663cd68cf7753d340683a3f8fc748ba6"
DEBU[0000] Created root filesystem for container c6cd681e6373ebb7294519bfa76cc9a3343d939dee94ad5c39955b8e6077f4de at /var/lib/containers/storage/btrfs/subvolumes/e12c5ce001445caca2392b8a72c81ed3663cd68cf7753d340683a3f8fc748ba6
DEBU[0000] Made network namespace at /var/run/netns/cni-1741c949-7af1-0fec-fa6c-ffd27396f57e for container c6cd681e6373ebb7294519bfa76cc9a3343d939dee94ad5c39955b8e6077f4de
INFO[0000] About to add CNI network lo (type=loopback)
INFO[0000] Got pod network &{Name:web Namespace:web ID:c6cd681e6373ebb7294519bfa76cc9a3343d939dee94ad5c39955b8e6077f4de NetNS:/var/run/netns/cni-1741c949-7af1-0fec-fa6c-ffd27396f57e Networks:[{Name:cni-podman1 Ifname:}] RuntimeConfig:map[cni-podman1:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}
INFO[0000] About to add CNI network cni-podman1 (type=bridge)
ERRO[0000] Error adding network: exit status 3
ERRO[0000] Error while adding pod to CNI network "cni-podman1": exit status 3
DEBU[0000] unmounted container "c6cd681e6373ebb7294519bfa76cc9a3343d939dee94ad5c39955b8e6077f4de"
DEBU[0000] Network is already cleaned up, skipping...
DEBU[0000] Cleaning up container c6cd681e6373ebb7294519bfa76cc9a3343d939dee94ad5c39955b8e6077f4de
DEBU[0000] Network is already cleaned up, skipping...
DEBU[0000] Container c6cd681e6373ebb7294519bfa76cc9a3343d939dee94ad5c39955b8e6077f4de storage is already unmounted, skipping...
DEBU[0000] ExitCode msg: "error configuring network namespace for container c6cd681e6373ebb7294519bfa76cc9a3343d939dee94ad5c39955b8e6077f4de: exit status 3"
Error: error configuring network namespace for container c6cd681e6373ebb7294519bfa76cc9a3343d939dee94ad5c39955b8e6077f4de: exit status 3
Here's what my CNI network config looks like:
chost1:~ # cat /etc/cni/net.d/cni-podman1.conflist
{
"cniVersion": "0.4.0",
"name": "cni-podman1",
"plugins": [
{
"type": "bridge",
"bridge": "cni-podman1",
"isGateway": true,
"ipMasq": true,
"hairpinMode": true,
"ipam": {
"type": "host-local",
"routes": [
{
"dst": "0.0.0.0/0"
}
],
"ranges": [
[
{
"subnet": "10.89.0.0/24",
"gateway": "10.89.0.1"
}
]
]
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
},
{
"type": "firewall",
"backend": ""
},
{
"type": "dnsname",
"domainName": "podman.io"
}
]
}
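Before blaming any one plugin, I wrote a quick throwaway check (my own script, not a podman tool, and the helper name is made up) that lists plugin types named in a conflist which have no matching executable in the CNI bin dir — `/usr/libexec/cni` is the plugin dir from the merged containers.conf in the debug log above:

```python
import json
import os

# cni_plugin_dirs value from the merged containers.conf in the log above
CNI_BIN_DIR = "/usr/libexec/cni"

def missing_plugins(conflist_json, bin_dir=CNI_BIN_DIR):
    """Return plugin types from a conflist with no executable in bin_dir."""
    conf = json.loads(conflist_json)
    return [p["type"] for p in conf["plugins"]
            if not os.access(os.path.join(bin_dir, p["type"]), os.X_OK)]

# e.g.: missing_plugins(open("/etc/cni/net.d/cni-podman1.conflist").read())
```

In my case this comes back empty (the dnsname binary is present and executable), so the failure happens inside the plugin itself rather than at lookup time.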
Even with a brand-new network using the default podman config, the same error happens:
chost1:~ # podman network create
/etc/cni/net.d/cni-podman3.conflist
chost1:~ # podman run -dt --name web --network cni-podman3 --log-level debug quay.io/libpod/alpine_nginx:latest
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman run -dt --name web --network cni-podman3 --log-level debug quay.io/libpod/alpine_nginx:latest)
DEBU[0000] Ignoring libpod.conf EventsLogger setting "/etc/containers/containers.conf". Use "journald" if you want to change this setting and remove libpod.conf files.
DEBU[0000] Reading configuration file "/usr/share/containers/containers.conf"
DEBU[0000] Merged system config "/usr/share/containers/containers.conf": &{{[] [] containers-default-0.14.10 [] host enabled [CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] [] [nproc=32768:32768] [] [] [] false [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] false false false /usr/bin/catatonit private k8s-file -1 bridge false 2048 private /usr/share/containers/seccomp.json 65536k private host 65536} {false systemd [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] ctrl-p,ctrl-q true /var/run/libpod/events/events.log file [/usr/share/containers/oci/hooks.d] docker:// /pause k8s.gcr.io/pause:3.2 /usr/libexec/podman/catatonit shm false 2048 runc map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] missing false map[] [] [crun runc] [crun] [kata kata-runtime kata-qemu kata-fc] {false false false false false false} /etc/containers/policy.json false 3 /var/lib/containers/storage/libpod 10 /var/run/libpod /var/lib/containers/storage/volumes} {[/usr/libexec/cni] podman /etc/cni/net.d/}}
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver btrfs
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /var/run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /var/run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "btrfs"
DEBU[0000] Initializing event backend file
DEBU[0000] using runtime "/usr/bin/runc"
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist
INFO[0000] Found CNI network cni-podman1 (type=bridge) at /etc/cni/net.d/cni-podman1.conflist
INFO[0000] Found CNI network cni-podman2 (type=bridge) at /etc/cni/net.d/cni-podman2.conflist
INFO[0000] Found CNI network cni-podman3 (type=bridge) at /etc/cni/net.d/cni-podman3.conflist
WARN[0000] Default CNI network name podman is unchangeable
INFO[0000] Setting parallel job count to 7
DEBU[0000] parsed reference into "[btrfs@/var/lib/containers/storage+/var/run/containers/storage]quay.io/libpod/alpine_nginx:latest"
DEBU[0000] parsed reference into "[btrfs@/var/lib/containers/storage+/var/run/containers/storage]quay.io/libpod/alpine_nginx:latest"
DEBU[0000] parsed reference into "[btrfs@/var/lib/containers/storage+/var/run/containers/storage]@3ef70f7291f47dfe2b82931a993e16f5a44a0e7a68034c3e0e086d77f5829adc"
DEBU[0000] parsed reference into "[btrfs@/var/lib/containers/storage+/var/run/containers/storage]quay.io/libpod/alpine_nginx:latest"
DEBU[0000] parsed reference into "[btrfs@/var/lib/containers/storage+/var/run/containers/storage]@3ef70f7291f47dfe2b82931a993e16f5a44a0e7a68034c3e0e086d77f5829adc"
DEBU[0000] using systemd mode: false
DEBU[0000] setting container name web
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/etc/containers/seccomp.json"
DEBU[0000] Allocated lock 6 for container 3c790404a7a5da3c9627dfc4c539ac9a8a9181a12d0a5f34008f5b75a3861be9
DEBU[0000] parsed reference into "[btrfs@/var/lib/containers/storage+/var/run/containers/storage]@3ef70f7291f47dfe2b82931a993e16f5a44a0e7a68034c3e0e086d77f5829adc"
DEBU[0000] created container "3c790404a7a5da3c9627dfc4c539ac9a8a9181a12d0a5f34008f5b75a3861be9"
DEBU[0000] container "3c790404a7a5da3c9627dfc4c539ac9a8a9181a12d0a5f34008f5b75a3861be9" has work directory "/var/lib/containers/storage/btrfs-containers/3c790404a7a5da3c9627dfc4c539ac9a8a9181a12d0a5f34008f5b75a3861be9/userdata"
DEBU[0000] container "3c790404a7a5da3c9627dfc4c539ac9a8a9181a12d0a5f34008f5b75a3861be9" has run directory "/var/run/containers/storage/btrfs-containers/3c790404a7a5da3c9627dfc4c539ac9a8a9181a12d0a5f34008f5b75a3861be9/userdata"
DEBU[0000] container "3c790404a7a5da3c9627dfc4c539ac9a8a9181a12d0a5f34008f5b75a3861be9" has CgroupParent "machine.slice/libpod-3c790404a7a5da3c9627dfc4c539ac9a8a9181a12d0a5f34008f5b75a3861be9.scope"
DEBU[0000] Made network namespace at /var/run/netns/cni-58929a5a-ae3e-0df5-b2b0-2efa579c9c6e for container 3c790404a7a5da3c9627dfc4c539ac9a8a9181a12d0a5f34008f5b75a3861be9
INFO[0000] About to add CNI network lo (type=loopback)
DEBU[0000] mounted container "3c790404a7a5da3c9627dfc4c539ac9a8a9181a12d0a5f34008f5b75a3861be9" at "/var/lib/containers/storage/btrfs/subvolumes/1b1881f96bf642009c0a7f41f67311c7f89c9176a31f138318f6042cc6e21332"
DEBU[0000] Created root filesystem for container 3c790404a7a5da3c9627dfc4c539ac9a8a9181a12d0a5f34008f5b75a3861be9 at /var/lib/containers/storage/btrfs/subvolumes/1b1881f96bf642009c0a7f41f67311c7f89c9176a31f138318f6042cc6e21332
INFO[0000] Got pod network &{Name:web Namespace:web ID:3c790404a7a5da3c9627dfc4c539ac9a8a9181a12d0a5f34008f5b75a3861be9 NetNS:/var/run/netns/cni-58929a5a-ae3e-0df5-b2b0-2efa579c9c6e Networks:[{Name:cni-podman3 Ifname:}] RuntimeConfig:map[cni-podman3:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}
INFO[0000] About to add CNI network cni-podman3 (type=bridge)
ERRO[0000] Error adding network: exit status 3
ERRO[0000] Error while adding pod to CNI network "cni-podman3": exit status 3
DEBU[0000] unmounted container "3c790404a7a5da3c9627dfc4c539ac9a8a9181a12d0a5f34008f5b75a3861be9"
DEBU[0000] Network is already cleaned up, skipping...
DEBU[0000] Cleaning up container 3c790404a7a5da3c9627dfc4c539ac9a8a9181a12d0a5f34008f5b75a3861be9
DEBU[0000] Network is already cleaned up, skipping...
DEBU[0000] Container 3c790404a7a5da3c9627dfc4c539ac9a8a9181a12d0a5f34008f5b75a3861be9 storage is already unmounted, skipping...
DEBU[0000] ExitCode msg: "error configuring network namespace for container 3c790404a7a5da3c9627dfc4c539ac9a8a9181a12d0a5f34008f5b75a3861be9: exit status 3"
Error: error configuring network namespace for container 3c790404a7a5da3c9627dfc4c539ac9a8a9181a12d0a5f34008f5b75a3861be9: exit status 3
chost1:~ # cat /etc/cni/net.d/cni-podman3.conflist
{
"cniVersion": "0.4.0",
"name": "cni-podman3",
"plugins": [
{
"type": "bridge",
"bridge": "cni-podman3",
"isGateway": true,
"ipMasq": true,
"hairpinMode": true,
"ipam": {
"type": "host-local",
"routes": [
{
"dst": "0.0.0.0/0"
}
],
"ranges": [
[
{
"subnet": "10.89.2.0/24",
"gateway": "10.89.2.1"
}
]
]
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
},
{
"type": "firewall",
"backend": ""
},
{
"type": "dnsname",
"domainName": "dns.podman"
}
]
}
The reason I'm raising this here rather than in the podman issue tracker is that the moment I remove the dnsname plugin from the CNI config, everything goes back to working correctly. I've tried both the master branch and an older commit (17245c6) to make sure this wasn't introduced in the last few commits in the past 24 hours.
If you need any more debug output let me know and I'll gladly provide.
Thanks, Savoir
thanks for the detailed report @savoiringfaire. I have not been able to replicate this. Have you tried using the latest podman from the main branch?
Hi @baude - no problem, and thanks for the reply. I've tried building podman master from source and still get the same error:
chost1:~/go/src/github.com/containers/podman # make BUILDTAGS="exclude_graphdriver_devicemapper" binaries
Podman is being compiled without the systemd build tag. Install libsystemd on Ubuntu or systemd-devel on rpm based distro for journald support.
go build -mod=vendor -gcflags 'all=-trimpath=/root/go/src/github.com/containers/podman' -asmflags 'all=-trimpath=/root/go/src/github.com/containers/podman' -ldflags '-X github.com/containers/podman/v2/libpod/define.gitCommit=acf86ef5ab9c5307fe0bdc93bf534decaafe38ae -X github.com/containers/podman/v2/libpod/define.buildInfo=1600266985 -X github.com/containers/podman/v2/libpod/config._installPrefix=/usr/local -X github.com/containers/podman/v2/libpod/config._etcDir=/etc ' -tags "exclude_graphdriver_devicemapper" -o bin/podman ./cmd/podman
go build -mod=vendor -gcflags 'all=-trimpath=/root/go/src/github.com/containers/podman' -asmflags 'all=-trimpath=/root/go/src/github.com/containers/podman' -ldflags '-X github.com/containers/podman/v2/libpod/define.gitCommit=acf86ef5ab9c5307fe0bdc93bf534decaafe38ae -X github.com/containers/podman/v2/libpod/define.buildInfo=1600267002 -X github.com/containers/podman/v2/libpod/config._installPrefix=/usr/local -X github.com/containers/podman/v2/libpod/config._etcDir=/etc ' -tags "remote exclude_graphdriver_btrfs btrfs_noversion exclude_graphdriver_devicemapper containers_image_openpgp" -o bin/podman-remote ./cmd/podman
chost1:~/go/src/github.com/containers/podman # ./bin/podman start 4a0686c40a90
ERRO[0000] Error adding network: exit status 3
ERRO[0000] Error while adding pod to CNI network "podman": exit status 3
Error: unable to start container "4a0686c40a90dc397e17127b3639193ed09337ca2e14f24dac0f914fd0aa27e5": error configuring network namespace for container 4a0686c40a90dc397e17127b3639193ed09337ca2e14f24dac0f914fd0aa27e5: exit status 3
chost1:~/go/src/github.com/containers/podman # ./bin/podman --version
podman version 2.1.0-dev
Also tried creating a new network on the master branch and then a new container using that network, with no luck (same error) there either.
Thanks, Savoir
any way i can access or replicate your environment precisely?
@baude sure, it's just a personal server so more than happy for you to go in and have a poke around. I've added your public key from github - what's the best way of sending the IP to you? Via [email protected]?
Thanks, Savoir
sure, or irc...
type=AVC msg=audit(1600285322.958:886): apparmor="DENIED" operation="open" profile="/usr/sbin/dnsmasq" name="/run/containers/cni/dnsname/cni-podman1/dnsmasq.conf" pid=4656 comm="dnsmasq" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
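Decoded, that record says dnsmasq's own AppArmor profile blocked the read of the config the dnsname plugin generated. For anyone who wants to pull the fields apart, a throwaway parser (nothing official, just `re.findall` over the key=value pairs):

```python
import re

def parse_avc(record):
    """Extract key=value (and key="value") pairs from an audit AVC record."""
    return dict(re.findall(r'(\w+)="?([^"\s]+)"?', record))

avc = ('type=AVC msg=audit(1600285322.958:886): apparmor="DENIED" '
       'operation="open" profile="/usr/sbin/dnsmasq" '
       'name="/run/containers/cni/dnsname/cni-podman1/dnsmasq.conf" '
       'pid=4656 comm="dnsmasq" requested_mask="r" denied_mask="r" '
       'fsuid=0 ouid=0')

fields = parse_avc(avc)
# the confining profile, the file it was denied, and the denied permission:
print(fields["profile"], fields["name"], fields["denied_mask"])
```

So dnsmasq starts, fails to read its generated conf, and exits non-zero — which the dnsname plugin surfaces to podman as "exit status 3".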
@vrothberg does suse provide the dnsplugin as a package? in this case, it was compiled and added manually.
I've been poking around build.opensuse.org but could not find any reference to dnsname. @saschagrunert @sysrich, to double check: is dnsname shipped in OpenSUSE?
is dnsname shipped in OpenSUSE?
No I don't think so.
should it be? and then add the proper apparmor stuffs?
I never had a use case for this, but yeah it might be a good addition. WDYT @sysrich, @rhafer?
Incidentally, I could just make good use of this :-). So I created a package: https://build.opensuse.org/request/show/838590 Still need to work on the apparmor denials, but that shouldn't be much of a problem.
@savoiringfaire If you're interested in testing it should appear in https://download.opensuse.org/repositories/devel:/kubic/ soon :tm:
@rhafer oh, great, thanks! :grin: I was working on my own one, but it's my first time working with OBS so it took me longer than I was hoping. I've been playing around with AppArmor rules, and as far as I can tell (from the audit log and a look through the dnsname source), the only rule that needs to be added to dnsmasq's profile on openSUSE is:
/run/user/*/containers/cni/dnsname/*/* rw,
In theory I'd be happy to file a request somewhere to get that in, but based on https://en.opensuse.org/SDB:AppArmor it looks like adding to the official apparmor-profiles repo requires a request to a mailing list. I'm not even sure that's the proper way to go: this is a rule added to the dnsmasq profile, but it exists to make dnsname work, and I wasn't sure whether two profiles targeting the same executable would even work. So it's probably better for me to leave it to someone more experienced in AppArmor packaging. I'd much appreciate a heads-up once it's done by whoever does it, so I can see what the proper way of doing it was, for my education :grinning:
@savoiringfaire You could just try to submit a PR to: https://gitlab.com/apparmor/apparmor
I think you'd also need to add:
/run/containers/cni/dnsname/*/* rw,
Depending on how podman is started, it might also use that directory.
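If it helps, the way I'd wire both rules in locally (assuming the packaged dnsmasq profile pulls in the usual local include, i.e. `#include <local/usr.sbin.dnsmasq>` — worth checking in `/etc/apparmor.d/usr.sbin.dnsmasq` first) would be something like:

```
# /etc/apparmor.d/local/usr.sbin.dnsmasq
# Let dnsmasq read/write the configs the dnsname CNI plugin generates,
# for rootful and rootless podman respectively.
/run/containers/cni/dnsname/*/* rw,
/run/user/*/containers/cni/dnsname/*/* rw,
```

and then reload the profile with `apparmor_parser -r /etc/apparmor.d/usr.sbin.dnsmasq`.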
may i close this issue?
/run/user/*/containers/cni/dnsname/*/* rw,
Thanks, @savoiringfaire. I use openSUSE Tumbleweed, and after an OS update I got this error:
ERRO[0000] error loading cached network config: network "<project>" not found in CNI cache
WARN[0000] falling back to loading from existing plugins on disk
Error: unable to start container <cid>: error configuring network namespace for container <cid>: error adding pod <name> to CNI network "<name>": dnsname error: dnsmasq failed with "\ndnsmasq: cannot read /run/user/1000/containers/cni/dnsname/<project>/dnsmasq.conf: Permission denied\n": exit status 3
exit code: 125
Your solution solved my issue. Thanks :rocket:
@baude @rhatdan I think this can be closed. (It's even been addressed in apparmor meanwhile https://gitlab.com/apparmor/apparmor/-/merge_requests/909)