Container doesn't start because 'group for sudo not found'
Describe the bug
I have two containers with Fedora 31 images. Neither of them can be started. Here is one of them:
CONTAINER ID CONTAINER NAME CREATED STATUS IMAGE NAME
7ae8d51ca24f pymol 2 years ago exited registry.fedoraproject.org/f31/fedora-toolbox:31
toolbox enter pymol -v
DEBU Running as real user ID 1000
DEBU Resolved absolute path to the executable as /usr/bin/toolbox
DEBU Running on a cgroups v2 host
DEBU Checking if /etc/subgid and /etc/subuid have entries for user ocelot
DEBU Validating sub-ID file /etc/subuid
DEBU Validating sub-ID file /etc/subgid
DEBU TOOLBOX_PATH is /usr/bin/toolbox
DEBU Migrating to newer Podman
DEBU Toolbox config directory is /home/ocelot/.config/toolbox
DEBU Current Podman version is 3.4.4
DEBU Creating runtime directory /run/user/1000/toolbox
DEBU Old Podman version is 3.4.4
DEBU Migration not needed: Podman version 3.4.4 is unchanged
DEBU Setting up configuration
DEBU Setting up configuration: file /home/ocelot/.config/containers/toolbox.conf not found
DEBU Resolving image name
DEBU Distribution (CLI): ''
DEBU Image (CLI): ''
DEBU Release (CLI): ''
DEBU Resolved image name
DEBU Image: 'fedora-toolbox:34'
DEBU Release: '34'
DEBU Resolving container name
DEBU Container: ''
DEBU Image: 'fedora-toolbox:34'
DEBU Release: '34'
DEBU Resolved container name
DEBU Container: 'fedora-toolbox-34'
DEBU Resolving image name
DEBU Distribution (CLI): ''
DEBU Image (CLI): ''
DEBU Release (CLI): ''
DEBU Resolved image name
DEBU Image: 'fedora-toolbox:34'
DEBU Release: '34'
DEBU Resolving container name
DEBU Container: 'pymol'
DEBU Image: 'fedora-toolbox:34'
DEBU Release: '34'
DEBU Resolved container name
DEBU Container: 'pymol'
DEBU Checking if container pymol exists
DEBU Inspecting mounts of container pymol
DEBU Requires org.freedesktop.Flatpak.SessionHelper
DEBU Calling org.freedesktop.Flatpak.SessionHelper.RequestSession
DEBU Starting container pymol
DEBU Inspecting entry point of container pymol
DEBU Entry point PID is a float64
DEBU Entry point of container pymol is toolbox (PID=0)
Error: invalid entry point PID of container pymol
The Podman log shows that the container can't be started because of the error failed to get group for sudo: group for sudo not found:
podman start --attach pymol --log-level debug
INFO[0000] podman filtering at log level debug
DEBU[0000] Called start.PersistentPreRunE(podman start --attach pymol --log-level debug)
DEBU[0000] overlay: storage already configured with a mount-program
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] Merged system config "/home/ocelot/.config/containers/containers.conf"
DEBU[0000] overlay: storage already configured with a mount-program
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/ocelot/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/ocelot/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000
DEBU[0000] Using static dir /home/ocelot/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/ocelot/.local/share/containers/storage/volumes
DEBU[0000] overlay: storage already configured with a mount-program
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0000] Initializing event backend journald
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
INFO[0000] Found CNI network podman (type=bridge) at /home/ocelot/.config/cni/net.d/87-podman.conflist
DEBU[0000] Default CNI network name podman is unchangeable
INFO[0000] Setting parallel job count to 97
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0000] overlay: mount_data=,lowerdir=/home/ocelot/.local/share/containers/storage/overlay/l/4Q3SR3VJ5DIERAZJ4ETCE4G6LJ:/home/ocelot/.local/share/containers/storage/overlay/l/JA36CM7NR3LTZ6QYUZE6DSXVQS,upperdir=/home/ocelot/.local/share/containers/storage/overlay/a2003efb1d225fa519c5d2f74737a05466a714e1070f198f8b7b8e49fe906ba2/diff,workdir=/home/ocelot/.local/share/containers/storage/overlay/a2003efb1d225fa519c5d2f74737a05466a714e1070f198f8b7b8e49fe906ba2/work,context="system_u:object_r:container_file_t:s0:c184,c662"
DEBU[0000] mounted container "7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9" at "/home/ocelot/.local/share/containers/storage/overlay/a2003efb1d225fa519c5d2f74737a05466a714e1070f198f8b7b8e49fe906ba2/merged"
DEBU[0000] Created root filesystem for container 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9 at /home/ocelot/.local/share/containers/storage/overlay/a2003efb1d225fa519c5d2f74737a05466a714e1070f198f8b7b8e49fe906ba2/merged
DEBU[0000] Not modifying container 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9 /etc/passwd
DEBU[0000] Not modifying container 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9 /etc/group
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription
DEBU[0000] Setting CGroup path for container 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9 to user.slice/libpod-7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9
DEBU[0000] set root propagation to "rslave"
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0000] Workdir "/" resolved to host path "/home/ocelot/.local/share/containers/storage/overlay/a2003efb1d225fa519c5d2f74737a05466a714e1070f198f8b7b8e49fe906ba2/merged"
DEBU[0000] Created OCI spec for container 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9 at /home/ocelot/.local/share/containers/storage/overlay-containers/7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9/userdata/config.json
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon args="[--api-version 1 -c 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9 -u 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9 -r /usr/bin/crun -b /home/ocelot/.local/share/containers/storage/overlay-containers/7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9/userdata -p /run/user/1000/overlay-containers/7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9/userdata/pidfile -n pymol --exit-dir /run/user/1000/libpod/tmp/exits --full-attach -l k8s-file:/home/ocelot/.local/share/containers/storage/overlay-containers/7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/1000/overlay-containers/7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/ocelot/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000 --exit-command-arg --log-level --exit-command-arg error --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9]"
INFO[0000] Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup path user.slice/conmon: open /sys/fs/cgroup/cgroup.subtree_control: permission denied
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
DEBU[0000] Received: -1
DEBU[0000] Cleaning up container 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9
DEBU[0000] Network is already cleaned up, skipping...
DEBU[0000] unmounted container "7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9"
Error: unable to start container 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9: create `/sys/fs/cgroup/user.slice/libpod-7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9`: Permission denied: OCI permission denied
[ocelot@yellowtrain ~]$ rm .config/containers/containers.conf
[ocelot@yellowtrain ~]$ podman start --attach pymol --log-level debug
INFO[0000] podman filtering at log level debug
DEBU[0000] Called start.PersistentPreRunE(podman start --attach pymol --log-level debug)
DEBU[0000] overlay: storage already configured with a mount-program
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] overlay: storage already configured with a mount-program
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/ocelot/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/ocelot/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000
DEBU[0000] Using static dir /home/ocelot/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/ocelot/.local/share/containers/storage/volumes
DEBU[0000] overlay: storage already configured with a mount-program
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0000] Initializing event backend journald
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
INFO[0000] Found CNI network podman (type=bridge) at /home/ocelot/.config/cni/net.d/87-podman.conflist
DEBU[0000] Default CNI network name podman is unchangeable
INFO[0000] Setting parallel job count to 97
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0000] overlay: mount_data=,lowerdir=/home/ocelot/.local/share/containers/storage/overlay/l/4Q3SR3VJ5DIERAZJ4ETCE4G6LJ:/home/ocelot/.local/share/containers/storage/overlay/l/JA36CM7NR3LTZ6QYUZE6DSXVQS,upperdir=/home/ocelot/.local/share/containers/storage/overlay/a2003efb1d225fa519c5d2f74737a05466a714e1070f198f8b7b8e49fe906ba2/diff,workdir=/home/ocelot/.local/share/containers/storage/overlay/a2003efb1d225fa519c5d2f74737a05466a714e1070f198f8b7b8e49fe906ba2/work,context="system_u:object_r:container_file_t:s0:c184,c662"
DEBU[0000] mounted container "7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9" at "/home/ocelot/.local/share/containers/storage/overlay/a2003efb1d225fa519c5d2f74737a05466a714e1070f198f8b7b8e49fe906ba2/merged"
DEBU[0000] Created root filesystem for container 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9 at /home/ocelot/.local/share/containers/storage/overlay/a2003efb1d225fa519c5d2f74737a05466a714e1070f198f8b7b8e49fe906ba2/merged
DEBU[0000] Not modifying container 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9 /etc/passwd
DEBU[0000] Not modifying container 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9 /etc/group
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription
DEBU[0000] Setting CGroups for container 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9 to user.slice:libpod:7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9
DEBU[0000] set root propagation to "rslave"
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0000] Workdir "/" resolved to host path "/home/ocelot/.local/share/containers/storage/overlay/a2003efb1d225fa519c5d2f74737a05466a714e1070f198f8b7b8e49fe906ba2/merged"
DEBU[0000] Created OCI spec for container 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9 at /home/ocelot/.local/share/containers/storage/overlay-containers/7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9/userdata/config.json
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon args="[--api-version 1 -c 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9 -u 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9 -r /usr/bin/crun -b /home/ocelot/.local/share/containers/storage/overlay-containers/7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9/userdata -p /run/user/1000/overlay-containers/7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9/userdata/pidfile -n pymol --exit-dir /run/user/1000/libpod/tmp/exits --full-attach -s -l k8s-file:/home/ocelot/.local/share/containers/storage/overlay-containers/7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/1000/overlay-containers/7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/ocelot/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000 --exit-command-arg --log-level --exit-command-arg error --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9]"
INFO[0000] Running conmon under slice user.slice and unitName libpod-conmon-7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9.scope
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
DEBU[0000] Received: 739793
INFO[0000] Got Conmon PID as 739785
DEBU[0000] Created container 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9 in OCI runtime
DEBU[0000] Attaching to container 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9
DEBU[0000] Starting container 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9 with command [toolbox --verbose init-container --home /home/ocelot --monitor-host --shell /bin/bash --uid 1000 --user ocelot]
DEBU[0000] Started container 7ae8d51ca24fbd80811718f546f949d35724f37a68a3f27a6b4d32b43120bbf9
DEBU[0000] Enabling signal proxying
level=debug msg="Running as real user ID 0"
level=debug msg="Resolved absolute path to the executable as /usr/bin/toolbox"
level=debug msg="TOOLBOX_PATH is /usr/bin/toolbox"
level=debug msg="Migrating to newer Podman"
level=debug msg="Setting up configuration"
level=debug msg="Setting up configuration: file /etc/containers/toolbox.conf not found"
level=debug msg="Setting up configuration: file /root/.config/containers/toolbox.conf not found"
level=debug msg="Resolving image name"
level=debug msg="Distribution (CLI): ''"
level=debug msg="Image (CLI): ''"
level=debug msg="Release (CLI): ''"
level=debug msg="Resolved image name"
level=debug msg="Image: 'fedora-toolbox:31'"
level=debug msg="Release: '31'"
level=debug msg="Resolving container name"
level=debug msg="Container: ''"
level=debug msg="Image: 'fedora-toolbox:31'"
level=debug msg="Release: '31'"
level=debug msg="Resolved container name"
level=debug msg="Container: 'fedora-toolbox-31'"
level=debug msg="XDG_RUNTIME_DIR is unset"
level=debug msg="XDG_RUNTIME_DIR set to /run/user/1000"
level=debug msg="Creating /run/.toolboxenv"
level=debug msg="Monitoring host"
level=debug msg="Path /run/host/etc exists"
level=debug msg="Resolved /etc/localtime to /run/host/usr/share/zoneinfo/America/Chicago"
level=debug msg="Creating regular file /etc/machine-id"
level=debug msg="Binding /etc/machine-id to /run/host/etc/machine-id"
level=debug msg="Creating directory /run/libvirt"
level=debug msg="Binding /run/libvirt to /run/host/run/libvirt"
level=debug msg="Creating directory /run/systemd/journal"
level=debug msg="Binding /run/systemd/journal to /run/host/run/systemd/journal"
level=debug msg="Creating directory /run/systemd/resolve"
level=debug msg="Binding /run/systemd/resolve to /run/host/run/systemd/resolve"
level=debug msg="Creating directory /run/udev/data"
level=debug msg="Binding /run/udev/data to /run/host/run/udev/data"
level=debug msg="Creating directory /tmp"
level=debug msg="Binding /tmp to /run/host/tmp"
level=debug msg="Creating directory /var/lib/flatpak"
level=debug msg="Binding /var/lib/flatpak to /run/host/var/lib/flatpak"
level=debug msg="Creating directory /var/lib/libvirt"
level=debug msg="Binding /var/lib/libvirt to /run/host/var/lib/libvirt"
level=debug msg="Creating directory /var/lib/systemd/coredump"
level=debug msg="Binding /var/lib/systemd/coredump to /run/host/var/lib/systemd/coredump"
level=debug msg="Creating directory /var/log/journal"
level=debug msg="Binding /var/log/journal to /run/host/var/log/journal"
level=debug msg="Creating directory /sys/fs/selinux"
level=debug msg="Binding /sys/fs/selinux to /usr/share/empty"
level=debug msg="Looking up group for sudo"
Error: failed to get group for sudo: group for sudo not found
DEBU[0000] Called start.PersistentPostRunE(podman start --attach pymol --log-level debug)
Expected behaviour
Clearly, the container ought to work...
Actual behaviour
It doesn't work and gives a misleading error that isn't helpful unless you look at the Podman log.
Output of toolbox --version (v0.0.90+)
toolbox version 0.0.99.3
Toolbox package info (rpm -q toolbox)
toolbox-0.0.99.3-2.fc34.x86_64
Output of podman version
Version: 3.4.4
API Version: 3.4.4
Go Version: go1.16.8
Built: Wed Dec 8 15:45:01 2021
OS/Arch: linux/amd64
Podman package info (rpm -q podman)
podman-1.9.2-1.fc32.x86_64
Info about your OS
ocelot@yellowtrain
------------------
OS: Fedora 34 (Workstation Edition) x86_64
Kernel: 5.16.20-100.fc34.x86_64
Uptime: 19 hours, 39 mins
Packages: 3168 (rpm), 29 (flatpak)
Shell: bash 5.1.0
Resolution: 2560x1440
DE: GNOME 40.9
WM: Mutter
WM Theme: Adwaita
Theme: Adwaita [GTK2/3]
Icons: Adwaita [GTK2/3]
Terminal: gnome-terminal
CPU: AMD Ryzen Threadripper 2950X (32) @ 3.500GHz
GPU: AMD ATI Radeon VII
Memory: 31197MiB / 64308MiB
Additional context
I remember that when I first upgraded to Fedora 34 from 33, I could still start and enter the two containers that used the Fedora 31 image. I don't know when they broke; only recently did I need to run them again, and I found that they are not working. Not good...
toolbox list -i
IMAGE ID IMAGE NAME CREATED
3a3fb0a29265 registry.fedoraproject.org/f31/fedora-toolbox:31 2 years ago
e6d38a7d896c registry.fedoraproject.org/fedora-toolbox:34 9 months ago
podman info
host:
arch: amd64
buildahVersion: 1.23.1
cgroupControllers:
- memory
- pids
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: conmon-2.0.32-1.fc34.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.0.32, commit: '
cpus: 32
distribution:
distribution: fedora
variant: workstation
version: "34"
eventLogger: journald
hostname: yellowtrain
idMappings:
gidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
kernel: 5.16.20-100.fc34.x86_64
linkmode: dynamic
logDriver: k8s-file
memFree: 3386540032
memTotal: 67432009728
ociRuntime:
name: crun
package: crun-1.4.4-1.fc34.x86_64
path: /usr/bin/crun
version: |-
crun version 1.4.4
commit: 6521fcc5806f20f6187eb933f9f45130c86da230
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
os: linux
remoteSocket:
path: /run/user/1000/podman/podman.sock
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: true
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: true
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.1.12-2.fc34.x86_64
version: |-
slirp4netns version 1.1.12
commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
libslirp: 4.4.0
SLIRP_CONFIG_VERSION_MAX: 3
libseccomp: 2.5.3
swapFree: 77150281728
swapTotal: 77151068160
uptime: 19h 48m 54.83s (Approximately 0.79 days)
plugins:
log:
- k8s-file
- none
- journald
network:
- bridge
- macvlan
volume:
- local
registries:
search:
- registry.fedoraproject.org
- registry.access.redhat.com
- docker.io
- quay.io
store:
configFile: /home/ocelot/.config/containers/storage.conf
containerStore:
number: 4
paused: 0
running: 1
stopped: 3
graphDriverName: overlay
graphOptions:
overlay.mount_program:
Executable: /usr/bin/fuse-overlayfs
Package: fuse-overlayfs-1.7.1-2.fc34.x86_64
Version: |-
fusermount3 version: 3.10.4
fuse-overlayfs: version 1.7.1
FUSE library version 3.10.4
using FUSE kernel interface version 7.31
graphRoot: /home/ocelot/.local/share/containers/storage
graphStatus:
Backing Filesystem: extfs
Native Overlay Diff: "false"
Supports d_type: "true"
Using metacopy: "false"
imageStore:
number: 2
runRoot: /run/user/1000
volumePath: /home/ocelot/.local/share/containers/storage/volumes
version:
APIVersion: 3.4.4
Built: 1638999901
BuiltTime: Wed Dec 8 15:45:01 2021
GitCommit: ""
GoVersion: go1.16.8
OsArch: linux/amd64
Version: 3.4.4
Hi @Fatmice! If you try to create a new container based on the Fedora 31 image, does it also work?
Could you, please, post the output of the following? I'd like to check whether the wheel group is present in the container.
$ podman unshare bash
$ cd $(podman mount pymol)
$ cat /etc/group
@containers/podman-maintainers Sorry for the mass tag (how should I call for your presence?). Would you have any ideas? In the log I see an OCI permission denied that the user tried to fix by deleting their user containers configuration file. Is that a problem?
I have not knowingly deleted anything... I don't recall messing with anything in the container.
Hi @Fatmice! If you try to create a new container based on the Fedora 31 image, does it also work?
No change in outcome
toolbox create test --verbose --image registry.fedoraproject.org/f31/fedora-toolbox:31
DEBU Running as real user ID 1000
DEBU Resolved absolute path to the executable as /usr/bin/toolbox
DEBU Running on a cgroups v2 host
DEBU Checking if /etc/subgid and /etc/subuid have entries for user ocelot
DEBU Validating sub-ID file /etc/subuid
DEBU Validating sub-ID file /etc/subgid
DEBU TOOLBOX_PATH is /usr/bin/toolbox
DEBU Migrating to newer Podman
DEBU Toolbox config directory is /home/ocelot/.config/toolbox
DEBU Current Podman version is 3.4.4
DEBU Creating runtime directory /run/user/1000/toolbox
DEBU Old Podman version is 3.4.4
DEBU Migration not needed: Podman version 3.4.4 is unchanged
DEBU Setting up configuration
DEBU Setting up configuration: file /home/ocelot/.config/containers/toolbox.conf not found
DEBU Resolving image name
DEBU Distribution (CLI): ''
DEBU Image (CLI): ''
DEBU Release (CLI): ''
DEBU Resolved image name
DEBU Image: 'fedora-toolbox:34'
DEBU Release: '34'
DEBU Resolving container name
DEBU Container: ''
DEBU Image: 'fedora-toolbox:34'
DEBU Release: '34'
DEBU Resolved container name
DEBU Container: 'fedora-toolbox-34'
DEBU Resolving image name
DEBU Distribution (CLI): ''
DEBU Image (CLI): 'registry.fedoraproject.org/f31/fedora-toolbox:31'
DEBU Release (CLI): ''
DEBU Resolved image name
DEBU Image: 'registry.fedoraproject.org/f31/fedora-toolbox:31'
DEBU Release: '31'
DEBU Resolving container name
DEBU Container: 'test'
DEBU Image: 'registry.fedoraproject.org/f31/fedora-toolbox:31'
DEBU Release: '31'
DEBU Resolved container name
DEBU Container: 'test'
DEBU Checking if container test already exists
DEBU Looking for image registry.fedoraproject.org/f31/fedora-toolbox:31
DEBU Resolving fully qualified name for image registry.fedoraproject.org/f31/fedora-toolbox:31 from RepoTags
DEBU Resolved image registry.fedoraproject.org/f31/fedora-toolbox:31 to registry.fedoraproject.org/f31/fedora-toolbox:31
DEBU Checking if 'podman create' supports '--mount type=devpts'
DEBU 'podman create' supports '--mount type=devpts'
DEBU Checking if 'podman create' supports '--ulimit host'
DEBU 'podman create' supports '--ulimit host'
DEBU Resolving path to the D-Bus system socket
DEBU /home/ocelot canonicalized to /home/ocelot
DEBU Resolving path to the Avahi socket
DEBU Resolving path to the KCM socket
DEBU Resolving path to the pcsc socket
DEBU Checking if /media is a symbolic link to /run/media
DEBU Checking if /mnt is a symbolic link to /var/mnt
DEBU Looking for toolbox.sh
DEBU Found /etc/profile.d/toolbox.sh
DEBU Checking if /home is a symbolic link to /var/home
DEBU Creating container test:
DEBU podman
DEBU --log-level
DEBU error
DEBU create
DEBU --dns
DEBU none
DEBU --env
DEBU TOOLBOX_PATH=/usr/bin/toolbox
DEBU --env
DEBU XDG_RUNTIME_DIR=/run/user/1000
DEBU --hostname
DEBU toolbox
DEBU --ipc
DEBU host
DEBU --label
DEBU com.github.containers.toolbox=true
DEBU --mount
DEBU type=devpts,destination=/dev/pts
DEBU --name
DEBU test
DEBU --network
DEBU host
DEBU --no-hosts
DEBU --pid
DEBU host
DEBU --privileged
DEBU --security-opt
DEBU label=disable
DEBU --ulimit
DEBU host
DEBU --userns
DEBU keep-id
DEBU --user
DEBU root:root
DEBU --volume
DEBU /:/run/host:rslave
DEBU --volume
DEBU /dev:/dev:rslave
DEBU --volume
DEBU /run/dbus/system_bus_socket:/run/dbus/system_bus_socket
DEBU --volume
DEBU /home/ocelot:/home/ocelot:rslave
DEBU --volume
DEBU /usr/bin/toolbox:/usr/bin/toolbox:ro
DEBU --volume
DEBU /run/user/1000:/run/user/1000
DEBU --volume
DEBU /run/avahi-daemon/socket:/run/avahi-daemon/socket
DEBU --volume
DEBU /run/.heim_org.h5l.kcm-socket:/run/.heim_org.h5l.kcm-socket
DEBU --volume
DEBU /media:/media:rslave
DEBU --volume
DEBU /mnt:/mnt:rslave
DEBU --volume
DEBU /run/pcscd/pcscd.comm:/run/pcscd/pcscd.comm
DEBU --volume
DEBU /run/media:/run/media:rslave
DEBU --volume
DEBU /etc/profile.d/toolbox.sh:/etc/profile.d/toolbox.sh:ro
DEBU registry.fedoraproject.org/f31/fedora-toolbox:31
DEBU toolbox
DEBU --log-level
DEBU debug
DEBU init-container
DEBU --gid
DEBU 1000
DEBU --home
DEBU /home/ocelot
DEBU --shell
DEBU /bin/bash
DEBU --uid
DEBU 1000
DEBU --user
DEBU ocelot
DEBU --monitor-host
Created container: test
Enter with: toolbox enter test
[ocelot@yellowtrain ~]$ toolbox enter test
Error: invalid entry point PID of container test
Could you, please, post the output of the following? I'd like to check whether the wheel group is present in the container.
$ podman unshare bash
$ cd $(podman mount pymol)
$ cat /etc/group
[ocelot@yellowtrain ~]$ podman unshare bash
[root@yellowtrain ~]# cd $(podman mount pymol)
[root@yellowtrain merged]# cat /etc/group
root:x:0:
bin:x:1:
daemon:x:2:
sys:x:3:
adm:x:4:
tty:x:5:
disk:x:6:
lp:x:7:
mem:x:8:
kmem:x:9:
wheel:x:10:ocelot
cdrom:x:11:
mail:x:12:
man:x:15:
dialout:x:18:
floppy:x:19:
games:x:20:
tape:x:33:
video:x:39:
ftp:x:50:
lock:x:54:
audio:x:63:
users:x:100:
nobody:x:65534:
dbus:x:81:
utmp:x:22:
utempter:x:35:
input:x:999:
kvm:x:36:qemu
render:x:998:
systemd-journal:x:190:
systemd-coredump:x:997:
systemd-network:x:192:
systemd-resolve:x:193:
tss:x:59:
polkitd:x:996:
dip:x:40:
printadmin:x:995:
gluster:x:994:
rtkit:x:172:
pulse-access:x:993:
pulse-rt:x:992:
pulse:x:171:
brlapi:x:991:
qemu:x:107:
nm-openconnect:x:990:
unbound:x:989:
usbmuxd:x:113:
chrony:x:988:
geoclue:x:987:
avahi:x:70:
pipewire:x:986:
saslauth:x:76:
dnsmasq:x:985:
radvd:x:75:
rpc:x:32:
ssh_keys:x:984:
libvirt:x:983:
openvpn:x:982:
nm-openvpn:x:981:
abrt:x:173:
apache:x:48:
colord:x:980:
rpcuser:x:29:
gdm:x:42:
gnome-initial-setup:x:979:
sshd:x:74:
slocate:x:21:
vboxsf:x:978:
tcpdump:x:72:
ocelot:x:1000:ocelot
systemd-timesync:x:977:
screen:x:84:
jackuser:x:976:
flatpak:x:975:
firebird:x:974:
wbpriv:x:88:
deluge:x:973:
akmods:x:972:
vboxusers:x:971:
power:x:970:
parsec:x:969:
parsec-clients:x:968:parsec
systemd-oom:x:967:
rtlsdr:x:966:
sgx:x:965:
I have not knowingly deleted anything... I don't recall messing with anything in the container.
I was referring to this snippet:
[ocelot@yellowtrain ~]$ rm .config/containers/containers.conf
No change in outcome
Then we can most likely cross out a problem with Podman itself.
[root@yellowtrain merged]# cat /etc/group ... wheel:x:10:ocelot ...
Okay, the group is there, so we can also cross out some problem there.
First of all, thank you for the extra info. At this moment I don't have an answer, but I'll try to reproduce with a Fedora 31 image. Bear in mind, though, that Fedora 31 is long EOL, so I can't promise much. But it would be great if we got this working.
I don't know when they broke.
:(
First of all, thank you for the extra info. At this moment I don't have an answer, but I'll try to reproduce with a Fedora 31 image. Bear in mind, though, that Fedora 31 is long EOL, so I can't promise much. But it would be great if we got this working.
Isn't that the point of a container, though? I set up these containers long ago with the environments needed to run that software, and they ran fine... until they didn't. A container is a self-contained environment that ought to keep running while things outside it change. The point is that these were set up when Fedora 31 was not EOL.
CONTAINER ID CONTAINER NAME CREATED STATUS IMAGE NAME
228819d3e3ef nuclearcraft 2 years ago exited registry.fedoraproject.org/f31/fedora-toolbox:31
7ae8d51ca24f pymol 2 years ago exited registry.fedoraproject.org/f31/fedora-toolbox:31
I have not knowingly deleted anything... I don't recall messing with anything in the container.
I was referring to this snippet:
[ocelot@yellowtrain ~]$ rm .config/containers/containers.conf
This was something that others did to try to get the container running. I remember reading it in some other bug thread on here where they were told to try it.
Just tried to reproduce on Rawhide, but to no avail: the container starts. I just realized you're running Fedora 34, so I'll first have to rebase to that version to try to reproduce.
Also, I've noticed you're running Podman v3.4.4 but your rpm is back from the Fedora 32 days - podman-1.9.2-1.fc32.x86_64. What's the story there?
Also, I've noticed you're running Podman v3.4.4 but your rpm is back from the Fedora 32 days - podman-1.9.2-1.fc32.x86_64. What's the story there?
Doesn't make a difference. There's no story. podman-3:3.4.7-1.fc34.x86_64 doesn't change the outcome.
Hi. I found this issue when my podman-toolbox package in Debian Testing got upgraded to 0.0.99.3-1. Once I downgraded that package, and that package alone, back to 0.0.99.2-2, I was able to enter my containers once again.
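(On Debian, that downgrade would presumably be something like the following; the exact version string is taken from the comment above:)
$ sudo apt install podman-toolbox=0.0.99.2-2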
Sorry to jump in, but I think I found something when trying to get an image based on openSUSE to start:
$ podman unshare bash
$ cd $(podman mount pymol)
$ cat /etc/group
This will list the groups of the host machine, not the image itself. To get the list of groups in the image you need to use cat etc/group (notice the missing / at the start of the path).
The "wheel" group is also missing in the OpenSUSE image, so I had to created a new Containerfile using the image as base and add a RUN groupadd wheel
to make it work.
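A minimal sketch of that, for anyone wanting to do the same (the base image name, tags, and container name here are only examples, not the exact ones from my setup):
$ cat > Containerfile <<'EOF'
# Example base image; substitute the image you actually use
FROM registry.opensuse.org/opensuse/toolbox:latest
# Add the "wheel" group that toolbox expects to find
RUN groupadd wheel
EOF
$ podman build --tag opensuse-toolbox-wheel .
$ toolbox create suse --image localhost/opensuse-toolbox-wheel:latest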
Hello, I have a very similar problem. It seems to be something introduced in the latest release, 0.0.99.3, because the previous one (0.0.99.2) works fine. Fedora 34 (the default) works as expected:
➜ alex@alextop ~ toolbox create
Image required to create toolbox container.
Download registry.fedoraproject.org/fedora-toolbox:34 (500MB)? [y/N]: y
Created container: fedora-toolbox-34
Enter with: toolbox enter
➜ alex@alextop ~ toolbox enter
/bin/sh: line 1: /bin/zsh: No such file or directory
Error: command /bin/zsh not found in container fedora-toolbox-34
Using /bin/bash instead.
⬢[alex@toolbox ~]$ cat /etc/hostname
toolbox⬢[alex@toolbox ~]$
But Fedora 35 fails:
Image required to create toolbox container.
Download registry.fedoraproject.org/fedora-toolbox:35 (500MB)? [y/N]: y
Created container: fedora-toolbox-35
Enter with: toolbox enter fedora-toolbox-35
➜ alex@alextop ~ toolbox enter fedora-toolbox-35
Error: invalid entry point PID of container fedora-toolbox-35
➜ alex@alextop ~ toolbox --verbose enter fedora-toolbox-35
DEBU Running as real user ID 1000
DEBU Resolved absolute path to the executable as /usr/bin/toolbox
DEBU Running on a cgroups v2 host
DEBU Checking if /etc/subgid and /etc/subuid have entries for user alex
DEBU Validating sub-ID file /etc/subuid
DEBU Validating sub-ID file /etc/subgid
DEBU TOOLBOX_PATH is /usr/bin/toolbox
DEBU Migrating to newer Podman
DEBU Toolbox config directory is /home/alex/.config/toolbox
DEBU Current Podman version is 3.4.7
DEBU Creating runtime directory /run/user/1000/toolbox
DEBU Old Podman version is 3.4.7
DEBU Migration not needed: Podman version 3.4.7 is unchanged
DEBU Setting up configuration
DEBU Setting up configuration: file /home/alex/.config/containers/toolbox.conf not found
DEBU Resolving image name
DEBU Distribution (CLI): ''
DEBU Image (CLI): ''
DEBU Release (CLI): ''
DEBU Resolved image name
DEBU Image: 'fedora-toolbox:34'
DEBU Release: '34'
DEBU Resolving container name
DEBU Container: ''
DEBU Image: 'fedora-toolbox:34'
DEBU Release: '34'
DEBU Resolved container name
DEBU Container: 'fedora-toolbox-34'
DEBU Resolving image name
DEBU Distribution (CLI): ''
DEBU Image (CLI): ''
DEBU Release (CLI): ''
DEBU Resolved image name
DEBU Image: 'fedora-toolbox:34'
DEBU Release: '34'
DEBU Resolving container name
DEBU Container: 'fedora-toolbox-35'
DEBU Image: 'fedora-toolbox:34'
DEBU Release: '34'
DEBU Resolved container name
DEBU Container: 'fedora-toolbox-35'
DEBU Checking if container fedora-toolbox-35 exists
DEBU Inspecting mounts of container fedora-toolbox-35
DEBU Starting container fedora-toolbox-35
DEBU Inspecting entry point of container fedora-toolbox-35
DEBU Entry point PID is a float64
DEBU Entry point of container fedora-toolbox-35 is toolbox (PID=0)
Error: invalid entry point PID of container fedora-toolbox-35
➜ alex@alextop ~ podman logs fedora-toolbox-35
Error: failed to get the current user: user: lookup userid 0: invalid argument
Same error with Fedora 36, but Fedora 33 works.
Specs:
OS: Fedora 34
➜ alex@alextop ~ podman version
Version: 3.4.7
API Version: 3.4.7
Go Version: go1.16.15
Built: Thu Apr 21 19:38:09 2022
OS/Arch: linux/amd64
➜ alex@alextop ~ toolbox --version
toolbox version 0.0.99.3
UPDATE:
I tried 0.0.99.2 by downloading the release artifact and compiling it manually:
cd src
go build
Doing the same with the downloaded 0.0.99.3 also works, so it seems the problem is related to the rpm.
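(For reference, the full sequence was roughly the following; the tarball URL follows GitHub's usual release-archive layout and is an assumption, not a link copied from the release page:)
$ curl -LO https://github.com/containers/toolbox/archive/refs/tags/0.0.99.2.tar.gz
$ tar xf 0.0.99.2.tar.gz
$ cd toolbox-0.0.99.2/src
$ go build
$ ./toolbox enter fedora-toolbox-35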
Sorry to jump in, but I think I found something when trying to get an image based on openSUSE to start:
$ podman unshare bash
$ cd $(podman mount pymol)
$ cat /etc/group
This will list the groups of the host machine, not the image itself. To get the list of groups in the image you need to use cat etc/group (notice the missing / at the start of the path).
That's a really good point, @jbiason !
@Fatmice do you still have the pymol container based on Fedora 31 that stopped working? If so, can you please try:
$ podman unshare bash
$ cd $(podman mount pymol)
$ cat etc/group # NB: it doesn't start with a /
...
➜ alex@alextop ~ podman logs fedora-toolbox-35
Error: failed to get the current user: user: lookup userid 0: invalid argument
@Alex-Izquierdo that's https://github.com/containers/toolbox/issues/1001
@Fatmice do you still have the pymol container based on Fedora 31 that stopped working? If so, can you please try:
$ podman unshare bash
$ cd $(podman mount pymol)
$ cat etc/group # NB: it doesn't start with a /
...
@debarshiray
[root@yellowtrain merged]# cat etc/group
root:x:0:
bin:x:1:
daemon:x:2:
sys:x:3:
adm:x:4:
tty:x:5:
disk:x:6:
lp:x:7:
mem:x:8:
kmem:x:9:
wheel:x:10:ocelot
cdrom:x:11:
mail:x:12:
man:x:15:
dialout:x:18:
floppy:x:19:
games:x:20:
tape:x:33:
video:x:39:
ftp:x:50:
lock:x:54:
audio:x:63:
users:x:100:
nobody:x:65534:
utmp:x:22:
utempter:x:35:
input:x:999:
kvm:x:36:
render:x:998:
systemd-journal:x:190:
systemd-coredump:x:997:
systemd-network:x:192:
systemd-resolve:x:193:
dbus:x:81:
systemd-timesync:x:996:
ssh_keys:x:995:
slocate:x:21:
tcpdump:x:72:
ocelot:x:1000:
tss:x:59:
unbound:x:994:
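For what it's worth, this listing has a wheel group but no sudo group, which lines up with the failed to get group for sudo error in the Podman log. If that is indeed the cause, a possible manual workaround (untested; the GID 16 is arbitrary, pick any GID not already in use) would be to append the missing group while the container is mounted:
$ podman unshare bash
# cd $(podman mount pymol)
# grep -q '^sudo:' etc/group || echo 'sudo:x:16:ocelot' >> etc/group
# cd / && podman umount pymol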