Exit policy for Quadlet pods is stop instead of continue (the default)
Issue Description
The generated systemd service for a .pod file creates the pod with the --exit-policy=stop argument, meaning the pod is stopped (and then removed by the unit's ExecStopPost) when the last container of the pod exits (not counting the infra container).
This is a change of behavior from the default, and it is not documented.
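For comparison, with the plain CLI the default exit policy is continue, so a pod created by hand keeps running after its containers exit (pod and image names below are just examples):
$ podman pod create --name demo
$ podman run --pod demo docker.io/library/alpine:latest /bin/true
$ podman pod ps
# the pod still exists and its infra container keeps running;
# with --exit-policy=stop it would have been stopped once the
# alpine container exited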
Steps to reproduce the issue
- Create these files in ~/.config/containers/systemd
- foobar.pod
[Pod]
PodName=foobar
Network=pasta
[Install]
WantedBy=default.target
- foo.container
[Container]
ContainerName=foo
Pod=foobar.pod
Image=docker.io/library/alpine:latest
Exec=/bin/echo foo container command
[Service]
Type=oneshot
RemainAfterExit=yes
Restart=on-failure
RestartSec=3
[Install]
WantedBy=default.target
- Restart the systemd default target
systemctl --user restart default.target
Describe the results you received
The foobar pod is deleted once the foo container exits.
Describe the results you expected
The foobar pod is kept alive after the foo container exits.
podman info output
host:
  arch: amd64
  buildahVersion: 1.39.2
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-1:2.1.13-1
    path: /usr/bin/conmon
    version: 'conmon version 2.1.13, commit: 82de887596ed8ee6d9b2ee85e4f167f307bb569b'
  cpuUtilization:
    idlePercent: 98.05
    systemPercent: 0.39
    userPercent: 1.55
  cpus: 16
  databaseBackend: sqlite
  distribution:
    distribution: arch
    version: unknown
  eventLogger: journald
  freeLocks: 2045
  hostname: treacle
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1008
      size: 1
    - container_id: 1
      host_id: 624288
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1008
      size: 1
    - container_id: 1
      host_id: 624288
      size: 65536
  kernel: 6.13.4-arch1-1
  linkmode: dynamic
  logDriver: journald
  memFree: 2220281856
  memTotal: 32666664960
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.14.0-1
      path: /usr/lib/podman/aardvark-dns
      version: aardvark-dns 1.14.0
    package: netavark-1.14.0-1
    path: /usr/lib/podman/netavark
    version: netavark 1.14.0
  ociRuntime:
    name: crun
    package: crun-1.20-1
    path: /usr/bin/crun
    version: |-
      crun version 1.20
      commit: 9c9a76ac11994701dd666c4f0b869ceffb599a66
      rundir: /run/user/1008/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-2025_02_17.a1e48a0-1
    version: ""
  remoteSocket:
    exists: true
    path: /run/user/1008/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 7745626112
  swapTotal: 8589930496
  uptime: redacted
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries: {}
store:
  configFile: /home/foobar/.config/containers/storage.conf
  containerStore:
    number: 2
    paused: 0
    running: 2
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/foobar/.local/share/containers/storage
  graphRootAllocated: 1011306463232
  graphRootUsed: 359770730496
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /home/foobar/tmp
  imageStore:
    number: 6
  runRoot: /run/user/1008/containers
  transientStore: false
  volumePath: /home/foobar/.local/share/containers/storage/volumes
version:
  APIVersion: 5.4.1
  Built: 1741727220
  BuiltTime: Tue Mar 11 21:07:00 2025
  GitCommit: b79bc8afe796cba51dd906270a7e1056ccdfcf9e
  GoVersion: go1.24.1
  Os: linux
  OsArch: linux/amd64
  Version: 5.4.1
Podman in a container
No
Privileged Or Rootless
Rootless
Upstream Latest Release
Yes
Additional environment details
No response
Additional information
$ systemctl --user cat --no-pager foobar-pod
# /run/user/1008/systemd/generator/foobar-pod.service
# Automatically generated by /usr/lib/systemd/user-generators/podman-user-generator
#
[X-Pod]
PodName=foobar
Network=pasta
[Install]
WantedBy=default.target
[Unit]
Wants=podman-user-wait-network-online.service
After=podman-user-wait-network-online.service
SourcePath=/home/foobar/.config/containers/systemd/foobar.pod
RequiresMountsFor=%t/containers
[Service]
SyslogIdentifier=%N
ExecStart=/usr/bin/podman pod start --pod-id-file=%t/%N.pod-id
ExecStop=/usr/bin/podman pod stop --pod-id-file=%t/%N.pod-id --ignore --time=10
ExecStopPost=/usr/bin/podman pod rm --pod-id-file=%t/%N.pod-id --ignore --force
ExecStartPre=/usr/bin/podman pod create --infra-conmon-pidfile=%t/%N.pid --pod-id-file=%t/%N.pod-id --exit-policy=stop --replace --network pasta --infra-name foobar-infra --name foobar
Environment=PODMAN_SYSTEMD_UNIT=%n
Type=forking
Restart=on-failure
PIDFile=%t/%N.pid
Notice the --exit-policy=stop in the command.
Workaround:
[Pod]
PodmanArgs=--exit-policy=continue
The reason the exit policy is set is so that the pod actually stops; otherwise the infra container would run forever, the pod unit would never deactivate, and that would be confusing.
Are you looking for --init-ctr functionality with Quadlet, or what is your issue with the exit policy?
I did want to make an init container at first, but I couldn't make it work with --init-ctr either, as the pod was getting deleted (I don't have the test file around anymore, but I could try to adapt the ones given above).
So, I'm using a fake init container. My use case is to connect my pod's network to a WireGuard interface (which routes all internet connections). It also sets up some iptables rules.
It uses a oneshot service type:
# foobar-wireguard.container
[Service]
Type=oneshot
RemainAfterExit=yes
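For context, the whole unit looks roughly like this; the image and the setup command below are placeholders, not the real ones:
# foobar-wireguard.container (sketch; Image= and Exec= are placeholders)
[Unit]
Description=Set up the WireGuard tunnel for the foobar pod

[Container]
ContainerName=foobar-wireguard
Pod=foobar.pod
# Placeholder image; in practice any image that ships the wg/iptables tooling
Image=docker.io/library/alpine:latest
# Placeholder command standing in for the actual tunnel/iptables setup
Exec=/bin/sh -c 'echo set up wireguard and iptables here'
# NET_ADMIN is typically needed to create the interface and firewall rules
AddCapability=NET_ADMIN

[Service]
Type=oneshot
RemainAfterExit=yes

[Install]
WantedBy=default.target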
I added Requisite= and After= to my other container services so they are only started after the WireGuard tunnel is set up:
[Unit]
Requisite=foobar-wireguard.service
After=foobar-wireguard.service
I had to add this to my .pod file to make it work:
[Pod]
PodmanArgs=--exit-policy=continue
The issue can be transformed into:
- an immediate documentation issue that explains the exit policy Quadlet sets for pods
- a feature request that adds an ExitPolicy configuration on the [Pod] block in .pod files (see the sketch below)
I'm willing to contribute to both.
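For the feature request, the idea would be something along these lines; ExitPolicy= does not exist today, it is the proposed key:
# foobar.pod (sketch of the proposed option, not currently supported)
[Pod]
PodName=foobar
Network=pasta
# Proposed key that would replace the PodmanArgs workaround
ExitPolicy=continue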
Documentation for the default exit policy should be done regardless for sure.
As for your use case, I think it would be nice for Quadlet to support the init container concept natively. The exit policy wouldn't have to change; the exit of the init container would just be ignored instead.
One issue with podman is that --init-ctr only exists for podman create and not for podman run, which Quadlet uses, so we would need changes in podman as well, not just in the generator, to make that work.
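For reference, this is roughly how the flag is used with the CLI today; it is only accepted by podman create, and the names below are just examples:
$ podman pod create --name mypod
# "once" runs the init container a single time and then removes it;
# "always" re-runs it on every pod start
$ podman create --pod mypod --init-ctr=once docker.io/library/alpine:latest \
    /bin/sh -c 'echo one-time pod setup'
$ podman create --pod mypod docker.io/library/alpine:latest sleep infinity
$ podman pod start mypod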
cc @ygalblum
The reason the exit policy is set is so that the pod actually stops; otherwise the infra container would run forever, the pod unit would never deactivate, and that would be confusing.
This is the default behavior of pods without Quadlet. By the way, if you run podman pod rm ... the associated systemd unit will exit cleanly.
Also, if you use the Quadlet generator, you probably want long-lived pods managed by systemd that are not deleted unless you stop the associated unit.
And containers that are linked to a pod get started by the pod too, as the generator adds Wants= dependencies to the pod's systemd unit definition.
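Concretely, the generated units end up with dependencies roughly like this; the exact directives are my reading of the generated output and may differ between Podman versions:
# foobar-pod.service (generated from foobar.pod) gets, per linked container:
[Unit]
Wants=foo.service
Before=foo.service

# foo.service (generated from foo.container with Pod=foobar.pod) gets:
[Unit]
BindsTo=foobar-pod.service
After=foobar-pod.service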
I will probably have to open another issue, but the forced Wants dependency is also a problem.
I moved a setup that uses multiple networks to a pod. My maintenance containers were started because of the forced Wants= dependency, so it started my database migration container; I had to stop it and rename its .container file (to .bak).
Also, you can't use multiple networks with pasta; this is not documented, although the docs say that multiple Network= entries are possible.
I had DNS resolution stop working when using .network files (not pasta?), with netavark saying dns request failed: io error: Network is unreachable (os error 101); this is why I changed the setup to use a pod. Before the .network files I was using the host network, which is less secure. I had the same DNS resolution problem on an older Podman version too; there may already be an open issue for it.
I think I had all the problems I could have when running Podman in production.
I even had an issue after running iptables-save with a running Podman container as root. The published port became unavailable after a reboot due to rules in the prerouting table that got saved and restored.
I will probably have to open another issue, but the forced Wants dependency is also a problem.
https://docs.podman.io/en/latest/markdown/podman-systemd.unit.5.html#startwithpod
This is already solved.
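In a .container file that looks roughly like this; the container name and image are just examples:
# db-migration.container (example names)
[Container]
ContainerName=db-migration
Pod=foobar.pod
Image=docker.io/library/alpine:latest
# Do not start this container automatically when the pod starts;
# it can still be started manually with systemctl when needed
StartWithPod=false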
Also, you can't use multiple networks with pasta; this is not documented, although the docs say that multiple Network= entries are possible.
It is. The docs mention that Network= is the same as --network, and if you look there it says that only multiple "user-defined" networks (what you see in podman network ls) are allowed.
https://docs.podman.io/en/latest/markdown/podman-create.1.html
pasta is not a "user-defined" network; it is a network mode and is clearly listed as such, so you cannot mix these things.
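To illustrate with example network names:
# Allowed: multiple user-defined networks (these show up in "podman network ls")
[Pod]
Network=backend.network
Network=frontend.network

# Not allowed: pasta is a network mode, not a user-defined network,
# so it cannot be combined with another Network= line
[Pod]
Network=pasta
Network=backend.network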
I had DNS resolution stop working when using .network files (not pasta?), with netavark saying dns request failed: io error: Network is unreachable (os error 101); this is why I changed the setup to use a pod. Before the .network files I was using the host network, which is less secure. I had the same DNS resolution problem on an older Podman version too; there may already be an open issue for it.
Without a reproducer there is nothing we can do about that.
I even had an issue after running iptables-save with a running Podman container as root. The published port became unavailable after a reboot due to rules in the prerouting table that got saved and restored.
Well, that is just expected, as podman inserts rules to make routing work for the containers when running as root. These rules should never be saved.
A friendly reminder that this issue had no activity for 30 days.
not stale
refer to https://github.com/containers/podman/issues/25596#issuecomment-2729214765
I've started getting dependency issues when restarting a lone service in a pod.
It seems like the pod shuts down as systemd is trying to restart the container, causing the restart to fail with the following:
A dependency job for container.service failed. See 'journalctl -xe' for details.
This is problematic, as podman-auto-update also triggers the issue, which means that any lone container within a pod will fail to come back online after an update.