
[podman] death of one container restarts both #278

Open
tobwen opened this issue Jun 15, 2021 · 8 comments
Comments

@tobwen

tobwen commented Jun 15, 2021

What's the issue?

When killing a process in a container in a pod (with more than one container), all the containers get restarted.

How to reproduce?

podman pod create --name systemd-pod
podman create --pod systemd-pod alpine top
podman create --pod systemd-pod alpine top
podman generate systemd --files --name systemd-pod --new
cp *.service $HOME/.config/systemd/user
systemctl --user daemon-reload
systemctl --user start pod-systemd-pod.service
pkill -U tobwen --newest 'top'

What's expected?

Only the dead container should be restarted.

Note

This only happens if the pod is started by systemd. When it's started directly with `podman pod start ...`, it works as expected.
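For context, the units emitted by `podman generate systemd --new` wire the containers to the pod roughly like this (container names here are illustrative placeholders; the exact output varies by podman version):

```ini
# pod-systemd-pod.service (sketch)
[Unit]
Requires=container-a.service container-b.service
Before=container-a.service container-b.service

# container-a.service (sketch)
[Unit]
BindsTo=pod-systemd-pod.service
After=pod-systemd-pod.service
```

With `Requires=`, one container service stopping can stop the pod unit, and `BindsTo=` then pulls the sibling container down with it, which would explain the behavior reported above.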

What's the environment?

podman version 3.3.0-dev
conmon version 2.0.30-dev

@haircommander
Collaborator

hmm this sounds like a `podman generate systemd` issue. @vrothberg would you agree?

Independently, Kubernetes supports the restart policies `Never`, `OnFailure` and `Always` for pods. I can't remember if podman supports such configurations, but it sounds like it is behaving as if it's using one of the latter two.

@vrothberg
Member

I concur, @haircommander. It sounds more like an issue in the dependencies among the container services inside the pod service.

@rhatdan
Member

rhatdan commented Jun 18, 2021

@haircommander What do those states mean when you have multiple containers within the pod?

If I have two or more containers within a Pod, and one fails?

Never - Just let the other containers run, or should the entire pod stop?
On-Failure - Just restart the container, or restart all containers?
Always - ^^

@vrothberg
Member

I think only the failed container should restart, along with any containers that depend on it; pretty much following the dependency tree down to all leaves.

@haircommander
Collaborator

https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy

it applies to each container separately, so a container stopping means it is restarted, but the rest of the pod isn't
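For reference, `restartPolicy` in Kubernetes is set once at the pod level but enforced per container; a sketch (names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  restartPolicy: OnFailure   # declared per pod, applied per container
  containers:
  - name: first
    image: alpine
    command: ["top"]
  - name: second
    image: alpine
    command: ["top"]
```

If `first` exits with a non-zero status here, the kubelet restarts only `first`; `second` keeps running, which is the behavior the issue asks `generate systemd` to match.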

@rhatdan
Member

rhatdan commented Jun 22, 2021

Does anyone know if we implement this correctly in podman play kube?

@vrothberg
Member

> Does anyone know if we implement this correctly in podman play kube?

I don't know about play kube. I think this issue must be addressed in generate systemd.

@nivekuil

I ran into this. I guess the easy fix is to change `Requires=` to `Wants=` in the pod service file; otherwise the pod dies when one container dies, and since the containers `BindsTo=` the pod, they all die. Are there any caveats to that?
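The suggested change, sketched against a generated pod unit (container service names are illustrative placeholders):

```ini
# pod-systemd-pod.service, [Unit] section

# Before: the pod hard-requires its containers, so one container
# stopping stops the pod, and the containers' BindsTo= then takes
# the siblings down with it.
#Requires=container-a.service container-b.service

# After: Wants= still starts the containers together with the pod,
# but a container stopping no longer propagates to the pod unit.
Wants=container-a.service container-b.service
Before=container-a.service container-b.service
```

One caveat to weigh: with `Wants=`, a container that fails to start no longer fails the pod service, so `systemctl --user status pod-systemd-pod.service` can report the pod as active while one of its containers is down.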


5 participants