
[podman] death of one container restarts both

Open · tobwen opened this issue 3 years ago · 8 comments

What's the issue?

When a process is killed in one container of a pod that has more than one container, all of the pod's containers get restarted.

How to reproduce?

podman pod create --name systemd-pod
podman create --pod systemd-pod alpine top
podman create --pod systemd-pod alpine top
podman generate systemd --files --name systemd-pod --new
cp *.service $HOME/.config/systemd/user
systemctl --user daemon-reload
systemctl --user start pod-systemd-pod.service
pkill -U tobwen --newest 'top'
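
To confirm which containers were restarted after the pkill, one option (a sketch, not part of the original report) is to compare how long each container has been up:

podman ps --format '{{.Names}} {{.Status}}'   # Status shows the uptime of each container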

What's expected?

Only the dead container should be restarted.

Note

This only happens if the pod is started by systemd. When it is started directly with podman pod start ..., it works as expected.

What's the environment?

podman version 3.3.0-dev
conmon version 2.0.30-dev

tobwen avatar Jun 15 '21 14:06 tobwen

Hmm, this sounds like a podman generate systemd issue. @vrothberg, would you agree?

Independently, Kubernetes supports restart policies of "Never", "OnFailure", and "Always" for pods. I can't remember whether podman supports such configurations, but it sounds like it is behaving as if it were using one of the latter two.

haircommander avatar Jun 15 '21 15:06 haircommander
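
For reference, podman does expose a per-container restart policy flag that roughly mirrors the Kubernetes values; a sketch (illustrative only, and restart policy may interact differently with systemd-managed pods):

podman create --pod systemd-pod --restart=no alpine top          # never restart automatically
podman create --pod systemd-pod --restart=on-failure alpine top  # restart only on non-zero exit
podman create --pod systemd-pod --restart=always alpine top      # always restart when the container exits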

I concur, @haircommander. It sounds more like an issue in the dependencies among the container services inside the pod service.

vrothberg avatar Jun 17 '21 07:06 vrothberg

@haircommander What do those policies mean when you have multiple containers within the pod? If I have two or more containers within a pod and one fails:

Never - just let the other containers run, or should the entire pod stop?
On-Failure - just restart that container, or restart all containers?
Always - same question as On-Failure.

rhatdan avatar Jun 18 '21 14:06 rhatdan

I think only the failed container should restart. Containers that depend on it should also restart, pretty much following the dependency tree downwards to all leaves.

vrothberg avatar Jun 21 '21 07:06 vrothberg
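
To make that dependency tree concrete, the relationships between the generated units can be inspected with systemctl (a sketch; container-<id>.service is a placeholder for one of the generated container unit names):

systemctl --user list-dependencies pod-systemd-pod.service
systemctl --user list-dependencies --reverse container-<id>.service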

https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy

It applies to each container separately, so a container stopping means it gets restarted, but the rest of the pod isn't.

haircommander avatar Jun 21 '21 13:06 haircommander

Does anyone know if we implement this correctly in podman play kube?

rhatdan avatar Jun 22 '21 13:06 rhatdan

Does anyone know if we implement this correctly in podman play kube?

I don't know about play kube. I think this issue must be addressed in generate systemd.

vrothberg avatar Jun 22 '21 13:06 vrothberg
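
One way to see what generate systemd actually wired up is to look at the dependency directives in the generated unit files (a sketch using the file locations from the reproduction steps above):

grep -E '^(Requires|Wants|BindsTo|After|Before)=' \
    $HOME/.config/systemd/user/pod-systemd-pod.service \
    $HOME/.config/systemd/user/container-*.service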

I ran into this. I guess the easy fix is to change Requires= to Wants= in the pod service file; otherwise the pod dies when one container dies, and since the containers BindsTo= the pod, they all die. Are there any caveats to that?

nivekuil avatar Aug 15 '21 21:08 nivekuil
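
A minimal sketch of that workaround, assuming the unit files generated by the reproduction steps above (untested; a daemon-reload and pod restart are needed for the change to take effect):

sed -i 's/^Requires=/Wants=/' $HOME/.config/systemd/user/pod-systemd-pod.service   # pod unit no longer hard-requires the container units
systemctl --user daemon-reload
systemctl --user restart pod-systemd-pod.service

With Wants= the pod unit no longer stops when a container unit fails, so the BindsTo= in the other container units is not triggered; whether this has side effects on ordered startup or shutdown is exactly the caveat being asked about.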