[BUG] mount nfs volume in docker-compose.yml issue
Description
It might be a bug, since I cannot find a solution online.
Mounting nfs share directly to a container works when using docker run but not when using a compose file with docker compose.
Steps To Reproduce
- Debian bookworm
- I have the following `docker-compose.yml`, which mounts an NFS share volume directly in the container:
```yaml
name: hello
services:
  hello:
    image: alpine:latest       # example image, replace with your desired image
    command: tail -f /dev/null # keeps the container running
    volumes:
      - my_nfs_volume:/mnt/nfs # mount the NFS share at /mnt/nfs in the container

volumes:
  my_nfs_volume:
    driver: local
    driver_opts:
      type: "nfs"
      o: "addr=192.168.1.100,rw,nfsvers=4.1" # NFS options: server address, rw, NFSv4.1
      device: ":/immich"                     # path of the share on the NFS server
```
- Doing `docker compose up -d` gives:

```
Error response from daemon: error while mounting volume '/var/lib/docker/volumes/hello_my_nfs_volume/_data': failed to mount local volume: mount :/immich:/var/lib/docker/volumes/hello_my_nfs_volume/_data, data: addr=192.168.1.100,nolock,soft: permission denied
```
- However, not using the compose file and running directly

```
docker run -it --name try --mount 'type=volume,source=nfs_try,target=/mnt/nfs,volume-driver=local,volume-opt=type=nfs,"volume-opt=o=addr=192.168.1.100,rw,nfsvers=4.1",volume-opt=device=:/immich' alpine:latest
```

works fine, and I can access the files in the share from the container shell.
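To compare against the volume driver, the export can also be mounted by hand on the host with the same options (a minimal sketch; the mount point /mnt/nfs-test is hypothetical):

```
# mount the export directly with the options from the compose file
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs -o rw,nfsvers=4.1 192.168.1.100:/immich /mnt/nfs-test
```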
Compose Version
```
$ docker compose version
Docker Compose version v2.29.7
```
Docker Environment
```
$ docker info
Client: Docker Engine - Community
 Version:    27.3.1
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.17.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.29.7
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 14
  Running: 6
  Paused: 0
  Stopped: 8
 Images: 10
 Server Version: 27.3.1
 Storage Driver: btrfs
  Btrfs:
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
 runc version: v1.1.14-0-g2c9f560
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.1.0-26-amd64
 Operating System: Debian GNU/Linux 12 (bookworm)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 15.41GiB
 Name: [hostname]
 ID: 39ef1582-be9b-4190-805c-3ddf8bb44e79
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
```
Anything else?
No response
Your `docker run` command isn't equivalent to `docker compose up`, as it uses the `--mount` flag.
Can you please try `docker volume create ...` to create the NFS volume, then use `docker run -v <volume>:<target>`?
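A minimal sketch of what that would look like, assuming the server address and export from the report:

```
# create the NFS-backed local volume up front...
docker volume create --driver local \
  -o type=nfs \
  -o o=addr=192.168.1.100,rw,nfsvers=4.1 \
  -o device=:/immich \
  nfs_try

# ...then mount it with the short -v syntax
docker run -it --name try -v nfs_try:/mnt/nfs alpine:latest
```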
The compose equivalent would be:

```yaml
(...)
    volumes:
      - source: ./web/dist
        target: /usr/share/nginx/html
        volume:
          <driver and options> # this is not supported by the spec, seems nobody asked for it 😅
```
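For reference, what the spec does support is declaring the volume at the top level and marking it `external`, so that a volume created by hand with `docker volume create` is reused instead of being created by compose (a sketch, reusing the names from the report):

```yaml
services:
  hello:
    image: alpine:latest
    volumes:
      - my_nfs_volume:/mnt/nfs

volumes:
  my_nfs_volume:
    external: true # compose uses the existing volume and will not create it
```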
Creating the volume using `docker volume create` and then mounting it in a container does work; I can get a shell into the container and navigate the files on the NFS share.
The thing that does not work is creating the volume inside the `docker-compose.yml`.
So what is the catch?
Update:
After creating the volume with `docker volume create` to test what you suggested, the compose file I provided above came up and mounted the share without the error. Fair enough, I thought, so I took the compose stack down, removed the manually created volume with `docker volume rm ...`, and tried again. Now when I do `docker compose up -d`, the container and the mount come up without errors, even after a host reboot.
It seemed strange to me that it now works in the smaller reproducible case I provided here, so I tried again on my original problem, an Immich compose file where the NFS volume is called nfs-immich:

- I removed all the volumes created in the previous steps.
- In the immich directory I ran `docker compose up -d` and got the same error I stated at the beginning.
- I created the volume manually using `docker volume create --driver local -o type=nfs -o o=addr="192.168.1.100,rw,nfsvers=4.1" -o device=:/immich immich_nfs-immich`, since docker will later look for `<compose name>_<volume name>`. Then `docker compose up -d` gives:
```
/docker/immich$ docker compose up -d
[+] Running 0/1
 ⠙ Network immich_default  Creating                                    0.2s
[+] Running 5/5
 ! Volume "immich_nfs-immich" already exists but was not created by Docker Compose.
 ✔ Network immich_default              Created                         0.2s
 ✔ Container immich_postgres           Started                         0.6s
 ✔ Container immich_machine_learning   Started                         0.7s
 ✔ Container immich_redis              Started                         0.7s
 ✔ Container immich_server             Started
```
So it started without issues, and the files are present in /var/lib/docker/volumes/immich_nfs-immich/_data.

- I took the compose stack down, removed the volume `immich_nfs-immich`, and checked that it is no longer listed in `docker volume ls`.
- Now if I do `docker compose up -d`, all services come up and the volume gets created without the earlier error.

It seems that once the volume has been created, docker compose is able to create it again automatically, even after you remove it. Maybe it is a problem of permissions? I'm using only the user admin, which is in the docker group; I never used the root user or sudo.
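The sequence that ends up working, condensed into a sketch with the names from this report:

```
# 1. create the volume by hand once -- this succeeds
docker volume create --driver local \
  -o type=nfs -o o=addr=192.168.1.100,rw,nfsvers=4.1 -o device=:/immich \
  immich_nfs-immich

# 2. tear everything down again
docker compose down
docker volume rm immich_nfs-immich

# 3. from now on, compose creates the volume itself without the error
docker compose up -d
```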
Thanks for this detailed description, I'll try to reproduce and investigate
I checked: `docker compose up` sends the exact same API payload to the VolumeCreate engine API as `docker volume create ...`.
I can't find any way to explain this weird behavior 😢
Maybe a race condition?
Could you try running `docker volume create (...) && docker run -v ...` so there's no delay between those two commands?
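Concretely, something like this (a sketch, reusing the values from the report; the volume name nfs_race is hypothetical):

```
# create the volume and use it immediately, with no gap in between
docker volume create --driver local \
  -o type=nfs -o o=addr=192.168.1.100,rw,nfsvers=4.1 -o device=:/immich \
  nfs_race \
&& docker run --rm -it -v nfs_race:/mnt/nfs alpine:latest ls /mnt/nfs
```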
I had a similar problem. I also noticed that if you delete a volume with `docker volume rm`, the data still exists on the NFS server. Is this normal and correct behavior?
> Is this normal and correct behavior?

Yes it is. The volume is created as a local volume (just a plain folder on the docker host) and configured as an NFS mount. Removing the volume unmounts the NFS share and removes the folder, but has no impact on the NFS server.
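You can see this with `docker volume inspect`: the local driver just manages a mountpoint directory under /var/lib/docker/volumes (a sketch, using the volume name from this thread):

```
$ docker volume inspect my_nfs_volume --format '{{ .Mountpoint }}'
/var/lib/docker/volumes/my_nfs_volume/_data

# removing the volume unmounts the share and deletes this folder only;
# the files on the NFS server itself are untouched
$ docker volume rm my_nfs_volume
```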
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I ran into a similar issue but solved it by setting the NFS "Operation mode" on my NAS to "kernel mode" instead of "user mode", which is the default.