NFS volume not mounted using podman-compose
**Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)**

/kind bug
**Description**
Although podman #4303 claims to have been fixed by podman #4305, nfs volumes are not mounted, at least not with podman-compose.
**Steps to reproduce the issue:**
- Create a `docker-compose.yaml` with the following content:

```yaml
version: '3'
services:
  nginx:
    image: nginxinc/nginx-unprivileged
    container_name: nginxu
    ports:
      - 8888:8080
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    volumes:
      - /docker_volumes/config/nginx:/config
      - "nfs-data:/data"
    restart: unless-stopped
volumes:
  nfs-data:
    driver: local
    driver_opts:
      type: nfs
      o: nfsvers=4,addr=<my obfuscated IP>,rw
      device: ":/mnt/seagate4tb/testnfs"
```
Note: it can be any image; I chose nginx-unprivileged arbitrarily. The important lines are in the `nfs-data` section.
- Execute `podman-compose up -d`:

```
podman-compose up -d
['podman', '--version', '']
using podman version: 4.2.0
** excluding: set()
podman volume inspect root_nfs-data || podman volume create root_nfs-data
['podman', 'volume', 'inspect', 'root_nfs-data']
Error: inspecting object: no such volume root_nfs-data
['podman', 'volume', 'create', '--label', 'io.podman.compose.project=root', '--label', 'com.docker.compose.project=root', 'root_nfs-data']
['podman', 'volume', 'inspect', 'root_nfs-data']
['podman', 'network', 'exists', 'root_default']
podman run --name=nginxu -d --label io.podman.compose.config-hash=123 --label io.podman.compose.project=root --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=root --label com.docker.compose.project.working_dir=/root --label com.docker.compose.project.config_files=docker-compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=nginx -e PUID=1000 -e PGID=1000 -e TZ=Europe/London -v /docker_volumes/config/nginx:/config -v root_nfs-data:/data --net root_default --network-alias nginx -p 8888:8080 --restart unless-stopped nginxinc/nginx-unprivileged
e836a4f2c88aa4a0da5933a05109e6fd3999086943156421ad769526bd152267
exit code: 0
```
Note the line with the error message:

```
Error: inspecting object: no such volume root_nfs-data
```
- Execute `mount | grep nfs` to verify that nfs was mounted, or rather, that it was NOT mounted:

```
mount | grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
```
Note the missing nfs mount.
**Describe the results you received:**
Volume root_nfs-data was created but the specified nfs volume was not mounted.
To verify that the nfs mount itself works outside of podman, I ran the following:

```
mount -t nfs <my hostname>:/mnt/seagate4tb/testnfs testnfs
ls testnfs
foo  foo1  foobar  foobaz
```
I also checked `podman inspect root_nfs-data`:

```json
[
    {
        "Name": "root_nfs-data",
        "Driver": "local",
        "Mountpoint": "/var/lib/containers/storage/volumes/root_nfs-data/_data",
        "CreatedAt": "2022-09-26T10:35:47.245872463-04:00",
        "Labels": {
            "com.docker.compose.project": "root",
            "io.podman.compose.project": "root"
        },
        "Scope": "local",
        "Options": {},
        "UID": 101,
        "GID": 101,
        "MountCount": 0,
        "NeedsCopyUp": true
    }
]
```
Note that the nfs mount options from the `docker-compose.yaml` file are missing: `Options` is empty.
**Describe the results you expected:**
The same commands using docker-compose and `docker inspect` show how it should look:

```
docker inspect andrev_nfs-data
[
    {
        "CreatedAt": "2022-09-19T18:27:58+01:00",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "andrev",
            "com.docker.compose.volume": "nfs-data"
        },
        "Mountpoint": "/var/lib/docker/volumes/andrev_nfs-data/_data",
        "Name": "andrev_nfs-data",
        "Options": {
            "device": ":/mnt/seagate4tb/testnfs",
            "o": "nfsvers=4,addr=<my obfuscated ip>,rw",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
```
And to prove that the nfs mount occurred:

```
$ mount | grep nfs
:/mnt/seagate4tb/testnfs on /var/lib/docker/volumes/andrev_nfs-data/_data type nfs4 (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.178.215,local_lock=none,addr=192.168.178.220)
$ docker exec -it nginxu /bin/bash
nginx@6bb2044a7412:/$ ls /data
foo  foo1  foobar  foobaz
nginx@6bb2044a7412:/$
```
**Additional information you deem important (e.g. issue happens only occasionally):**
**Output of `podman version`:**

See above: 4.2.0
**Output of `podman info`:**

```
podman info
host:
  arch: arm64
  buildahVersion: 1.27.0
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - pids
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.4-2.fc36.aarch64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.4, commit: '
  cpuUtilization:
    idlePercent: 98.35
    systemPercent: 1.01
    userPercent: 0.63
  cpus: 4
  distribution:
    distribution: fedora
    variant: server
    version: "36"
  eventLogger: journald
  hostname: rpi8.fritz.box
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.19.10-200.fc36.aarch64
  linkmode: dynamic
  logDriver: journald
  memFree: 6598569984
  memTotal: 8206974976
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.6-2.fc36.aarch64
    path: /usr/bin/crun
    version: |-
      crun version 1.6
      commit: 18cf2efbb8feb2b2f20e316520e0fd0b6c41ef4d
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 8206151680
  swapTotal: 8206151680
  uptime: 5h 14m 18.00s (Approximately 0.21 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /usr/share/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 22340960256
  graphRootUsed: 4157538304
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.2.0
  Built: 1660228991
  BuiltTime: Thu Aug 11 10:43:11 2022
  GitCommit: ""
  GoVersion: go1.18.4
  Os: linux
  OsArch: linux/arm64
  Version: 4.2.0
```
**Package info (e.g. output of `rpm -q podman` or `apt list podman`):**

```
dnf list | grep podman
cockpit-podman.noarch 53-1.fc36 @updates
podman.aarch64 4:4.2.0-2.fc36 @updates
podman-compose.noarch 1.0.3-6.fc36 @updates
podman-gvproxy.aarch64 4:4.2.0-2.fc36 @updates
podman-plugins.aarch64 4:4.2.0-2.fc36 @updates
ansible-collection-containers-podman.noarch 1.9.4-1.fc36 updates
pcp-pmda-podman.aarch64 5.3.7-4.fc36 updates
podman-docker.noarch 4:4.2.0-2.fc36 updates
podman-remote.aarch64 4:4.2.0-2.fc36 updates
podman-tests.aarch64 4:4.2.0-2.fc36 updates
podman-tui.aarch64 0.5.0-2.fc36 updates
python3-molecule-podman.noarch 1.0.1-2.fc36 fedora
python3-podman.noarch 3:4.2.0-6.fc36 updates
python3-podman-api.noarch 0.0.0-0.12.20200614gitd4b8263.fc36 fedora
```
**Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)**
Yes
**Additional environment details (AWS, VirtualBox, physical, etc.):**
This also happens with `type: cifs` when I try to mount a samba share. I exec'd into the container and ran `ls` on the location, and it is empty.
@luigi311 @Blindfreddy Please check whether you are using SELinux or not; it could be the cause.
You may need to add `:z` or `:Z` at the end of the volume attachment.
See more at https://github.com/containers/podman-compose/pull/574#issuecomment-1321946574
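For reference, the relabel suffix goes on the service-side mount entry, not on the top-level volume definition; a minimal sketch (the service name is a placeholder for your own):

```yaml
services:
  app:
    volumes:
      # :z relabels the content as shared between containers;
      # :Z makes the label private to this one container.
      - nfs-data:/data:z
```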
Hi. I'm experiencing a similar issue with a cifs volume. When I create the volume by hand, it works without a problem:

```
podman volume create cifs-gogs \
  --driver local \
  --opt type=cifs \
  --opt device=//192.168.1.4/nexus-data \
  --opt o=addr=192.168.1.4,username=xxx,password=xxx
```
```json
[
    {
        "Name": "cifs-gogs",
        "Driver": "local",
        "Mountpoint": "/var/lib/containers/storage/volumes/cifs-gogs/_data",
        "CreatedAt": "2023-04-10T15:02:00.787095702+02:00",
        "Labels": {},
        "Scope": "local",
        "Options": {
            "device": "//192.168.1.4/nexus-data",
            "o": "addr=192.168.1.4,username=xxx,password=xxx",
            "type": "cifs"
        },
        "MountCount": 0,
        "NeedsCopyUp": true,
        "NeedsChown": true
    }
]
```
But when I use podman-compose with:

```yaml
volumes:
  cifs-gogs:
    driver: local
    driver_opts:
      type: "cifs"
      o: "addr=192.168.1.4,username=xxx,password=xxx"
      device: "//192.168.1.4/nexus-data/"
```
I get:

```json
[
    {
        "Name": "gogs-compose_cifs-gogs",
        "Driver": "local",
        "Mountpoint": "/var/lib/containers/storage/volumes/gogs-compose_cifs-gogs/_data",
        "CreatedAt": "2023-04-10T15:02:44.622382416+02:00",
        "Labels": {},
        "Scope": "local",
        "Options": {},
        "MountCount": 0,
        "NeedsCopyUp": true
    }
]
```
I've tried adding `:z` or `:Z` to the mount point, e.g.

```yaml
volumes:
  - cifs-gogs:/data:z
```

but neither helped.
SELinux is set to Permissive.
Hello everyone,
I have the same problem.

| Tool | Version |
|---|---|
| podman-compose | 1.0.3 |
| podman | 4.2.0 |

When I create the volume on the command line it's fine, but when I use podman-compose I get the same error as @Blindfreddy.
Do you have any ideas?
Thank you :)
I'm not entirely sure what's going on, but after chasing this problem today and looking at the comments above and at other examples in search results, the issue seems to stem from podman-compose prepending the name of the directory that contains the `[docker|podman]-compose.y[a]ml` file to any named volumes.
So `cifs-gogs` is expanded to `gogs-compose_cifs-gogs`, which is not found when podman-compose checks the list of declared volumes (I suspect the names are also expanded in a second lookup that does not share state, which may be the underlying bug). Finding no match, it assumes the volume must be anonymous and creates a new plain local volume rather than failing.
The declared-but-uninitialized volume may be getting overlooked because nothing appears to reference it, so creating it would seem wasteful; this may be related to that other lookup I suspect is happening.
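To make the suspected behavior concrete, here is a minimal model of the naming I'm describing. This is an assumption inferred from the names seen in this thread (`root_nfs-data` for a compose file in `/root`, `gogs-compose_cifs-gogs`), not podman-compose's actual code; `default_project_name` and `prefixed_volume_name` are hypothetical helpers:

```python
import os

def default_project_name(compose_dir: str) -> str:
    # The default project name appears to be the basename of the
    # directory containing the compose file (e.g. /root -> "root").
    return os.path.basename(os.path.normpath(compose_dir))

def prefixed_volume_name(project: str, volume: str) -> str:
    # Named volumes get the project name prepended with "_".
    return f"{project}_{volume}"

# Matches the names observed in this thread:
print(prefixed_volume_name(default_project_name("/root"), "nfs-data"))
# -> root_nfs-data
```

If the prefixed name is computed in one place and looked up against unprefixed declared names in another, the "no such volume" behavior above would follow.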
I've been able to get a number of systems online by externally creating a volume with the expected name, like `gogs-compose_cifs-gogs`, and then referencing it as `cifs-gogs` within the compose file.
While this works, and it seems to make sense (podman will want to namespace names to avoid collisions, and directory names are a sensible choice), I am unsure whether it is brittle: will a fix for this bug break such volume references?
As a speculative mitigation, I also declared the expected volume (like `cifs-gogs`) within each compose file, which might be a good idea for anyone who wants to pursue this as a workaround, but it's also just a guess.
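Sketched as a command, the workaround looks like this; the volume name, share, and credentials are taken from the examples above and are assumptions to replace with your own. The script only prints the command for review rather than executing it:

```shell
#!/bin/sh
# Pre-create the volume under the project-prefixed name that
# podman-compose resolves "cifs-gogs" to, so the driver options from
# driver_opts are actually applied. The prefix is assumed to be the
# basename of the directory holding the compose file ("gogs-compose").
cmd="podman volume create gogs-compose_cifs-gogs \
  --driver local \
  --opt type=cifs \
  --opt device=//192.168.1.4/nexus-data \
  --opt o=addr=192.168.1.4,username=xxx,password=xxx"
# Printed here instead of run; execute it directly on the host.
echo "$cmd"
```

After this, `podman-compose up -d` finds the already-existing volume and attaches it instead of creating an option-less one.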