Quadlet does not remove anonymous volumes on service stop #20070

Closed
alaviss opened this issue Sep 20, 2023 · 5 comments · Fixed by #20085
Labels
kind/bug: Categorizes issue or PR as related to a bug.
locked - please file new issue/PR: Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@alaviss

alaviss commented Sep 20, 2023

Issue Description

Services generated by Quadlet do not remove their anonymous volumes after stopping.

This is due to the ExecStop command not having the -v flag set.
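
For reference, -v/--volumes is the podman rm flag that removes anonymous volumes together with the container, and the stop command Quadlet generated can be inspected with systemctl. A minimal sketch (the sleep.service name comes from the repro steps below):

$ podman rm --help | grep -- --volumes          # -v, --volumes removes anonymous volumes along with the container
$ systemctl cat sleep.service | grep ExecStop   # shows the podman rm invocation Quadlet generated, which lacks -v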

Steps to reproduce the issue

  1. Add sleep.container to quadlet
[Unit]
Description=A do-nothing container
[Container]
Image=docker.io/alpine:latest
Network=none
RunInit=yes
Exec=sleep infinity
Volume=/data
  2. Start sleep.service
  3. Run podman volume list and confirm that an anonymous volume was created
  4. Stop sleep.service
  5. Run podman volume list again (a shell sketch of these steps follows this list)
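
A shell sketch of the same steps, assuming a rootful setup with the sleep.container file installed in /etc/containers/systemd/ (the standard location for system-wide Quadlet units):

$ sudo cp sleep.container /etc/containers/systemd/
$ sudo systemctl daemon-reload         # Quadlet generates sleep.service from sleep.container
$ sudo systemctl start sleep.service
$ podman volume list                   # an anonymous volume for /data appears
$ sudo systemctl stop sleep.service
$ podman volume list                   # bug: the anonymous volume is still listed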

Describe the results you received

The anonymous volume is still there.

Describe the results you expected

The anonymous volume should be gone.

Manual pruning with podman volume prune is possible.
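
As a stopgap, the prune can also be run non-interactively; note that it removes every unused volume, not just the one leaked by this service:

$ podman volume prune       # prompts before removing all volumes not used by any container
$ podman volume prune -f    # same, but skips the confirmation prompt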

podman info output

host:
  arch: amd64
  buildahVersion: 1.31.2
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-2.fc38.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 94.19
    systemPercent: 2.28
    userPercent: 3.54
  cpus: 4
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: iot
    version: "38"
  eventLogger: journald
  freeLocks: 2028
  hostname: server-01.myth-bluegill.ts.net
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.4.15-200.fc38.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 2574450688
  memTotal: 8223969280
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.7.0-1.fc38.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.7.0
    package: netavark-1.7.0-1.fc38.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.7.0
  ociRuntime:
    name: crun
    package: crun-1.9-1.fc38.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.9
      commit: a538ac4ea1ff319bcfe2bf81cb5c6f687e2dc9d3
      rundir: /run/user/0/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.1-1.fc38.x86_64
    version: |-
      slirp4netns version 1.2.1
      commit: 09e31e92fa3d2a1d3ca261adaeb012c8d75a8194
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 8222928896
  swapTotal: 8222928896
  uptime: 1h 35m 2.00s (Approximately 0.04 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /usr/share/containers/storage.conf
  containerStore:
    number: 13
    paused: 0
    running: 13
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 68702699520
  graphRootUsed: 8922685440
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 13
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.6.2
  Built: 1694549242
  BuiltTime: Tue Sep 12 15:07:22 2023
  GitCommit: ""
  GoVersion: go1.20.7
  Os: linux
  OsArch: linux/amd64
  Version: 4.6.2

Podman in a container

No

Privileged Or Rootless

Privileged

Upstream Latest Release

Yes

Additional environment details

No response

Additional information

No response

alaviss added the kind/bug label on Sep 20, 2023
@rhatdan
Member

rhatdan commented Sep 21, 2023

Looks like the cleanup process did not succeed, perhaps because the sleep process did not receive signals? Or systemd killed conmon before it could run the cleanup.

@rhatdan
Member

rhatdan commented Sep 21, 2023

Not sure what this means:
"This is due to the ExecStop command not having the -v flag set."

@alaviss
Author

alaviss commented Sep 21, 2023

You need -v for podman rm to remove anonymous volumes. Whether that should also be the case with --rm in podman run, I don't know.

systemd runs the ExecStop command before trying to send signals; that worked, but it didn't remove the volumes.

You can repro it on the CLI too:

$ podman volume list
DRIVER      VOLUME NAME

$ podman run -d --rm -v /data --init docker.io/alpine sleep infinity
a54e7d89e14cab6cd9a49b60c6d64ef407af1cc7b727d8e186c6c8b776fa2152

$ podman volume list
DRIVER      VOLUME NAME
local       dfd0647552a1ab6569694b359a082dde4dfb94e513e5c56d4b3178927b6baaf5

$ podman rm -f a54e7d89e14cab6cd9a49b60c6d64ef407af1cc7b727d8e186c6c8b776fa2152
a54e7d89e14cab6cd9a49b60c6d64ef407af1cc7b727d8e186c6c8b776fa2152

$ podman ps -a
CONTAINER ID  IMAGE                                         COMMAND               CREATED       STATUS      PORTS       NAMES

$ podman volume list
DRIVER      VOLUME NAME
local       dfd0647552a1ab6569694b359a082dde4dfb94e513e5c56d4b3178927b6baaf5

$ podman volume prune
WARNING! This will remove all volumes not used by at least one container. The following volumes will be removed:
dfd0647552a1ab6569694b359a082dde4dfb94e513e5c56d4b3178927b6baaf5
Are you sure you want to continue? [y/N]
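
For comparison, a sketch of the same sequence with -v passed to podman rm (the container ID placeholder is illustrative); the anonymous volume is removed together with the container:

$ podman run -d --rm -v /data --init docker.io/alpine sleep infinity
$ podman rm -fv <container id>
$ podman volume list
DRIVER      VOLUME NAME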

@rhatdan
Member

rhatdan commented Sep 21, 2023

Yes, now I understand. I will have a pull request up tomorrow to fix this.

rhatdan added a commit to rhatdan/podman that referenced this issue Sep 21, 2023
If you are running a quadlet with anonymous volumes, then the volume
will leak every time you restart the service. This change will
cause the volume to be removed.

Fixes: containers#20070

Signed-off-by: Daniel J Walsh <[email protected]>
@rhatdan
Member

rhatdan commented Sep 21, 2023

This is basically a race condition between the --rm on the run command and the podman rm command, to see who wipes out the container first. If the podman run --rm happens first, then the anonymous volume will be removed. Adding the -v option to podman rm is the correct thing to do. You could argue that -v on podman rm should default to true.
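
Until a build with the fix is available, a workaround sketch is a systemd drop-in that re-declares ExecStop with -v added. The drop-in path is the standard override location; the exact podman rm command line varies by Quadlet version, so copy it from systemctl cat rather than the illustrative line below:

$ systemctl cat sleep.service | grep ExecStop    # note the podman rm command Quadlet generated
$ sudo mkdir -p /etc/systemd/system/sleep.service.d
$ sudo tee /etc/systemd/system/sleep.service.d/10-rm-volumes.conf <<'EOF'
[Service]
ExecStop=
# Illustrative only: reuse the command printed by systemctl cat, appending -v
ExecStop=/usr/bin/podman rm -v -f -i --cidfile=%t/%N.cid
EOF
$ sudo systemctl daemon-reload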

github-actions bot added the locked - please file new issue/PR label on Dec 21, 2023
github-actions bot locked as resolved and limited conversation to collaborators on Dec 21, 2023