docker-nfs-server
"mount(2): Operation not permitted" in plain docker installation
I'm getting `mount(2): Operation not permitted` when I try to mount the nfs-share. I've adapted AppArmor and added `cap_sys_admin` for my current user (which you mentioned in the linked issue). Since I only have a very limited idea what this whole capability thing is, I've followed some stackoverflow questions and added `cap_sys_admin benke` in `/etc/security/capability.conf`, as well as putting `auth optional pam_cap.so` in `/etc/pam.d/su` (although, while it seems to have worked, I guess this is probably not the right place, as I don't understand how `su` comes into this). In any case, after adding these changes, `capsh --print` for the user running the docker-container contains `cap_sys_admin+i` in `Current`:
Current: = cap_sys_admin+i
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,cap_wake_alarm,cap_block_suspend,cap_audit_read
Securebits: 00/0x0/1'b0
secure-noroot: no (unlocked)
secure-no-suid-fixup: no (unlocked)
secure-keep-caps: no (unlocked)
uid=1000(benke)
gid=1000(benke)
groups=4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),113(lpadmin),128(sambashare),133(docker),1000(benke),1001(fuse)
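As background on the output above: the `+i` suffix means the capability is only in the *inheritable* set, and mount(2) requires CAP_SYS_ADMIN in the *effective* set of the calling process, so an inheritable-only grant is not sufficient by itself. A quick way to inspect all sets, sketched for the current shell:

```shell
# The kernel exposes each process's capability sets as hex masks in
# /proc/<pid>/status; for the current shell:
grep '^Cap' /proc/self/status
# CapInh = inheritable, CapPrm = permitted, CapEff = effective,
# CapBnd = bounding, CapAmb = ambient. "+i" only affects CapInh;
# mount(2) needs CAP_SYS_ADMIN set in CapEff.
```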
However, this didn't fix the issue; nothing has changed. I hope you can help me out here, as I'm in the dark about how this is supposed to work. This is the full debug output when trying to mount:
sudo mount -v workbench.local:/ /media/nfs/ -v
mount.nfs: timeout set for Wed Apr 1 13:46:24 2020
mount.nfs: trying text-based options 'vers=4.2,addr=127.0.11.20,clientaddr=127.0.0.1'
mount.nfs: mount(2): Operation not permitted
mount.nfs: trying text-based options 'addr=127.0.11.20'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: portmap query retrying: RPC: Program not registered
mount.nfs: prog 100003, trying vers=3, prot=17
mount.nfs: portmap query failed: RPC: Program not registered
mount.nfs: requested NFS version or transport protocol is not supported
Here's the server output:
benke@id92 ~workbench/nfs $ docker-compose up nfs-server
Starting workbench_nfs-server_1 ... done
Attaching to workbench_nfs-server_1
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | SETTING UP ...
nfs-server_1 | ==================================================================
nfs-server_1 | ----> kernel module nfs is missing
nfs-server_1 | ----> attempting to load kernel module nfs
nfs-server_1 | ----> kernel module nfsd is missing
nfs-server_1 | ----> attempting to load kernel module nfsd
nfs-server_1 | ----> setup complete
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | STARTING SERVICES ...
nfs-server_1 | ==================================================================
nfs-server_1 | ----> starting rpcbind
nfs-server_1 | ----> starting exportfs
nfs-server_1 | ----> starting rpc.mountd on port 32767
nfs-server_1 | ----> starting rpc.nfsd on port 2049 with 4 server thread(s)
nfs-server_1 | ----> terminating rpcbind
nfs-server_1 | ----> all services started normally
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | SERVER STARTUP COMPLETE
nfs-server_1 | ==================================================================
nfs-server_1 | ----> list of enabled NFS protocol versions: 4.2, 4.1, 4
nfs-server_1 | ----> list of container exports:
nfs-server_1 | ----> /export *(rw,fsid=0,no_subtree_check,sync)
nfs-server_1 | ----> /export/debian *(rw,nohide,insecure,no_subtree_check,sync)
nfs-server_1 | ----> list of container ports that should be exposed: 2049 (TCP)
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | READY AND WAITING FOR NFS CLIENT CONNECTIONS
nfs-server_1 | ==================================================================
And this is my docker-compose.yml:

nfs-server:
  image: erichough/nfs-server
  ports:
    - 127.0.11.20:2049:2049
  volumes:
    - ./nfs/exports.txt:/etc/exports:ro
    - ./data/nfs-export:/export
    - /lib/modules:/lib/modules:ro
  cap_add:
    - SYS_ADMIN
    - SYS_MODULE
  environment:
    NFS_VERSION: 4.2
    NFS_DISABLE_VERSION_3: 1
  security_opt:
    - apparmor=erichough-nfs
After adding `insecure` to the root folder in exports.txt and adding port mappings for ports 111 and 32767, it appears to work with different NFS versions. Looks like it is a permission issue when using v4, and not a kernel issue as I assumed based on the answer in #20.
Here's my adapted exports.txt:
/export *(rw,fsid=0,insecure,no_subtree_check,sync)
/export/debian *(rw,nohide,insecure,no_subtree_check,sync)
and my adapted docker-compose.yml (I've only added port mappings and got rid of the version restrictions):

nfs-server:
  image: erichough/nfs-server
  ports:
    - 127.0.11.20:2049:2049
    - 127.0.11.20:111:111
    - 127.0.11.20:32767:32767
    - 127.0.11.20:32765:32765
  volumes:
    - ./nfs/exports.txt:/etc/exports:ro
    - ./data/nfs-export:/export
    - /lib/modules:/lib/modules:ro
  cap_add:
    - SYS_ADMIN
    - SYS_MODULE
  security_opt:
    - apparmor=erichough-nfs
and this is the output for different client versions, depending on which mountpoint I try:
sudo mount -v workbench.local:/ /media/mynfs/
mount.nfs: timeout set for Thu Apr 2 13:16:52 2020
mount.nfs: trying text-based options 'vers=4.2,addr=127.0.11.20,clientaddr=127.0.0.1'
When I use `/export` instead:
sudo mount -v workbench.local:/export /media/mynfs/
mount.nfs: timeout set for Thu Apr 2 13:20:53 2020
mount.nfs: trying text-based options 'vers=4.2,addr=127.0.11.20,clientaddr=127.0.0.1'
mount.nfs: mount(2): No such file or directory
mount.nfs: trying text-based options 'addr=127.0.11.20'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 127.0.11.20 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: portmap query retrying: RPC: Program not registered
mount.nfs: prog 100005, trying vers=3, prot=6
mount.nfs: trying 127.0.11.20 prog 100005 vers 3 prot TCP port 32767
In conclusion: NFSv4's permission checks are stricter, and I hadn't mapped port 111, which is needed for NFSv3. For v4 I assume I'll have to add user id mapping to make it work without the insecure-flag.
Unfortunately I have been getting nowhere today trying to get this working with AUTH_SYS and idmapd.
I can only get it to mount with the `insecure` flag (despite the fact that my client is using ports 686 and 979 to connect to the server, according to wireshark).
Some guidance would be greatly appreciated :-/
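For context on the `insecure` flag: in exports(5), `secure` (the default) only checks that the client's request originates from a privileged source port, i.e. one below 1024, which requires root on the client to bind; `insecure` lifts that check. Note that 686 and 979 are both below 1024. A trivial sketch of the boundary:

```shell
# exports(5): "secure" requires the client's source port to be < 1024;
# "insecure" allows any source port. Classify a few sample ports:
for port in 686 979 1023 1024 40000; do
  if [ "$port" -lt 1024 ]; then
    echo "$port privileged (passes 'secure')"
  else
    echo "$port unprivileged (needs 'insecure')"
  fi
done
```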
I've added idmapd.conf, and I've created users on the server with the same UID/GID and name as on the client, but I still get `Operation not permitted` when attempting to mount with NFSv4.
`exportfs -v` on the server says:
$ exportfs -v
/export <world> (sync,wdelay,hide,no_subtree_check,fsid=0,sec=sys,rw,secure,no_root_squash,no_all_squash)
`/sys/module/nfs/parameters/nfs4_disable_idmapping` is set to `N` on both the server and the client.
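For anyone reproducing this check, the parameter can be read directly on either end; a sketch, assuming the nfs module is loaded so the path exists:

```shell
# Y = the client sends numeric uids/gids over NFSv4 (id mapping bypassed);
# N = names of the form user@domain are mapped via idmapd/nsswitch.
cat /sys/module/nfs/parameters/nfs4_disable_idmapping 2>/dev/null \
  || echo "nfs module not loaded"
```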
Here's my idmapd.conf:
[General]
Verbosity=7
Domain=workbench.local
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
[Mapping]
Nobody-User=nobody
Nobody-Group=nobody
[Translation]
Method=nsswitch
Here's the server's DEBUG log:
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | SETTING UP ...
nfs-server_1 | ==================================================================
nfs-server_1 | ----> log level set to DEBUG
nfs-server_1 | ----> will use 4 rpc.nfsd server thread(s) (1 thread per CPU)
nfs-server_1 | ----> /etc/exports is bind-mounted
nfs-server_1 | ----> kernel module nfs is loaded
nfs-server_1 | ----> kernel module nfsd is loaded
nfs-server_1 | ----> setup complete
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | STARTING SERVICES ...
nfs-server_1 | ==================================================================
nfs-server_1 | ----> mounting rpc_pipefs filesystem onto /var/lib/nfs/rpc_pipefs
nfs-server_1 | mount: mount('rpc_pipefs','/var/lib/nfs/rpc_pipefs','rpc_pipefs',0x00008000,'(null)'):0
nfs-server_1 | ----> mounting nfsd filesystem onto /proc/fs/nfsd
nfs-server_1 | mount: mount('nfsd','/proc/fs/nfsd','nfsd',0x00008000,'(null)'):0
nfs-server_1 | ----> starting rpcbind
nfs-server_1 | ----> starting exportfs
nfs-server_1 | exporting *:/export
nfs-server_1 | ----> starting rpc.mountd on port 32767
nfs-server_1 | ----> starting rpc.statd on port 32765 (outgoing from port 32766)
nfs-server_1 | ----> starting rpc.idmapd
nfs-server_1 | rpc.idmapd: Setting log level to 10
nfs-server_1 |
nfs-server_1 | rpc.idmapd: libnfsidmap: using domain: workbench.local
nfs-server_1 | rpc.idmapd: libnfsidmap: Realms list: 'WORKBENCH.LOCAL'
nfs-server_1 | rpc.idmapd: libnfsidmap: processing 'Method' list
nfs-server_1 | rpc.idmapd: libnfsidmap: loaded plugin /usr/lib/libnfsidmap/nsswitch.so for method nsswitch
nfs-server_1 | rpc.idmapd: Expiration time is 600 seconds.
nfs-server_1 | rpc.idmapd: Opened /proc/net/rpc/nfs4.nametoid/channel
nfs-server_1 | rpc.idmapd: Opened /proc/net/rpc/nfs4.idtoname/channel
nfs-server_1 | ----> starting rpc.nfsd on port 2049 with 4 server thread(s)
nfs-server_1 | rpc.nfsd: knfsd is currently down
nfs-server_1 | rpc.nfsd: Writing version string to kernel: -2 +3 +4 +4.1 +4.2
nfs-server_1 | rpc.nfsd: Created AF_INET TCP socket.
nfs-server_1 | rpc.nfsd: Created AF_INET UDP socket.
nfs-server_1 | rpc.nfsd: Created AF_INET6 TCP socket.
nfs-server_1 | rpc.nfsd: Created AF_INET6 UDP socket.
nfs-server_1 | ----> all services started normally
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | SERVER STARTUP COMPLETE
nfs-server_1 | ==================================================================
nfs-server_1 | ----> list of enabled NFS protocol versions: 4.2, 4.1, 4, 3
nfs-server_1 | ----> list of container exports:
nfs-server_1 | ----> /export *(rw,sync,wdelay,hide,nocrossmnt,insecure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,fsid=0,anonuid=65534,anongid=65534,sec=sys,rw,insecure,no_root_squash,no_all_squash)
nfs-server_1 | ----> list of container ports that should be exposed:
nfs-server_1 | ----> 111 (TCP and UDP)
nfs-server_1 | ----> 2049 (TCP and UDP)
nfs-server_1 | ----> 32765 (TCP and UDP)
nfs-server_1 | ----> 32767 (TCP and UDP)
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | READY AND WAITING FOR NFS CLIENT CONNECTIONS
nfs-server_1 | ==================================================================
nfs-server_1 | rpc.statd: Version 2.3.4 starting
nfs-server_1 | rpc.statd: Flags: No-Daemon Log-STDERR TI-RPC
nfs-server_1 | rpc.statd: Local NSM state number: 3
nfs-server_1 | rpc.statd: Failed to open /proc/sys/fs/nfs/nsm_local_state: Read-only file system
nfs-server_1 | rpc.statd: Running as root. chown /var/lib/nfs to choose different user
nfs-server_1 | rpc.statd: Waiting for client connections
Taking a look now. It'll take me a moment to digest your issue. Thanks for posting your debug logs - that's super helpful! Stand by.
Couple of clarifying questions:
- Does it matter to you if you use NFSv3 or NFSv4? The performance difference, in my experience, is negligible. NFSv3 tends to be easier to get working, but NFSv4 has better security capabilities. If you don't care either way, I'd say set `NFS_VERSION=3` for now, just so we can narrow down the issue.
- You mentioned AppArmor. Is AppArmor active on your Docker host? If so, temporarily disabling it would help us troubleshoot.
- Is your client (where you're running `mount ...`) running inside a container? Or on your host machine? Or another machine?
- What are the ownership and permissions of `data/nfs-export`? i.e. `ls -al ./data/nfs-export`
> For v4 I assume I'll have to add user id mapping to make it work without the insecure-flag.

NFSv4 doesn't actually require `idmapd`, so for the time being I would suggest putting it aside.
Here's an updated docker-compose.yml that uses NFSv3 only, enables debug logging, exposes the required ports, and ditches AppArmor (assuming it's disabled on the host). Could you try something like this and post debug logs of both your client and server?

nfs-server:
  image: erichough/nfs-server
  ports:
    - 2049:2049
    - 2049:2049/udp
    - 111:111
    - 111:111/udp
    - 32765:32765
    - 32765:32765/udp
    - 32767:32767
    - 32767:32767/udp
  volumes:
    - ./nfs/exports.txt:/etc/exports:ro
    - ./data/nfs-export:/export
    - /lib/modules:/lib/modules:ro
  cap_add:
    - SYS_ADMIN
    - SYS_MODULE
  environment:
    NFS_VERSION: 3
    NFS_LOG_LEVEL: DEBUG
> 1. Does it matter to you if you use NFSv3 or NFSv4?

That's fine for now.

> 2. You mentioned AppArmor. Is AppArmor active on your Docker host?

Yes, it's my Ubuntu workstation. I've stopped and disabled AppArmor now (and rebooted to make sure nothing is loaded).

> 3. Is your client (where you're running `mount ...`) running inside a container?

No, it's on the host machine.

> 4. What are the ownership and permissions of `data/nfs-export`? i.e. `ls -al ./data/nfs-export`
drwxr-xr-x 1 root root 52 Apr 3 14:37 nfs-export
I've adapted my docker-compose.yml similarly to the one you've posted:

nfs-server:
  image: erichough/nfs-server
  ports:
    - 127.0.11.20:2049:2049
    - 127.0.11.20:2049:2049/udp
    - 127.0.11.20:111:111
    - 127.0.11.20:32767:32767
    - 127.0.11.20:32767:32767/udp
    - 127.0.11.20:32765:32765
    - 127.0.11.20:32765:32765/udp
  volumes:
    - ./nfs/exports.txt:/etc/exports:ro
    - ./data/nfs-export:/export
    - /lib/modules:/lib/modules:ro
  cap_add:
    - SYS_ADMIN
    - SYS_MODULE
  environment:
    NFS_VERSION: 3
    NFS_LOG_LEVEL: DEBUG
and the server is not starting now:
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | SETTING UP ...
nfs-server_1 | ==================================================================
nfs-server_1 | ----> setup complete
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | STARTING SERVICES ...
nfs-server_1 | ==================================================================
nfs-server_1 | mount: mounting rpc_pipefs on /var/lib/nfs/rpc_pipefs failed: Permission denied
nfs-server_1 | ---->
nfs-server_1 | ----> ERROR: unable to mount rpc_pipefs filesystem onto /var/lib/nfs/rpc_pipefs
nfs-server_1 | ---->
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | TERMINATING ...
nfs-server_1 | ==================================================================
nfs-server_1 | ----> terminating nfsd
nfs-server_1 | ----> WARNING: unable to terminate nfsd. if it had started already, check Docker host for lingering [nfsd] processes
nfs-server_1 | ----> rpc.statd was not running
nfs-server_1 | ----> rpc.mountd was not running
nfs-server_1 | ----> un-exporting filesystem(s)
nfs-server_1 | ----> rpcbind was not running
nfs-server_1 | ----> no active mount at /proc/fs/nfsd
nfs-server_1 | ----> no active mount at /var/lib/nfs/rpc_pipefs
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | TERMINATED
nfs-server_1 | ==================================================================
When I had this issue yesterday, I went through the AppArmor config to get it running. But AppArmor is definitely disabled now:
$ service apparmor status
● apparmor.service - AppArmor initialization
Loaded: loaded (/lib/systemd/system/apparmor.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:apparmor(7)
http://wiki.apparmor.net/
Hmm, that error message is certainly the one that usually accompanies AppArmor interference. But what's also weird is that even though `NFS_LOG_LEVEL` is set to `DEBUG`, we're not seeing the usual debug output. Unless you redacted the logging output?
AppArmor is clearly disabled on your host. I'm starting to wonder if this is an as-of-yet undiscovered bug in this image. Can you think of any other security controls on your system that might be interfering with `mount` capabilities?
I would still like to see the output of your client-side `mount` command with the "simplified" docker-compose.yml that you posted in your last message. That would give us insight into what happens with a proper NFSv3-only setup.
Do you have a secondary, independent host on which you could easily test? That might be useful in helping to give us an idea of where the problem lies.
Thanks for your patience in figuring this out. As long as you're willing, I'm happy to keep digging to get to the root of the problem. I think we'll get it solved!
> I would still like to see the output of your client-side `mount` command with the "simplified" docker-compose.yml that you posted in your last message. That would give us insight into what happens with a proper NFSv3-only setup.
Well the server is not starting with this config, so no mounting :-/
The host machine is a vanilla Ubuntu 18.04.4, no SELinux installed as far as I know. I have another pure Debian machine; I will configure it for docker, test the setup there, and post when I've done that (~~It's an i686 though, so quite different kernel-wise~~ There's no docker for 32-bit, I should have thought of that).
FYI: this is the mount output on my workstation (the Ubuntu box) when I just add the AppArmor stuff to the simplified config, not sure if this helps:
$ sudo mount -v workbench.local:/export /media/iPhone/
mount.nfs: timeout set for Sun Apr 5 07:36:31 2020
mount.nfs: trying text-based options 'vers=4.2,addr=127.0.11.20,clientaddr=127.0.0.1'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'vers=4.1,addr=127.0.11.20,clientaddr=127.0.0.1'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'vers=4.0,addr=127.0.11.20,clientaddr=127.0.0.1'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'addr=127.0.11.20'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 127.0.11.20 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: portmap query retrying: RPC: Program not registered
mount.nfs: prog 100005, trying vers=3, prot=6
mount.nfs: trying 127.0.11.20 prog 100005 vers 3 prot TCP port 32767
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting workbench.local:/export
This is the accompanying docker-compose.yml:
nfs-server:
  image: erichough/nfs-server
  ports:
    - 127.0.11.20:2049:2049
    - 127.0.11.20:2049:2049/udp
    - 127.0.11.20:111:111
    - 127.0.11.20:32767:32767
    - 127.0.11.20:32767:32767/udp
    - 127.0.11.20:32765:32765
    - 127.0.11.20:32765:32765/udp
  volumes:
    - ./nfs/exports.txt:/etc/exports:ro
    - ./data/nfs-export:/export
    - /lib/modules:/lib/modules:ro
  cap_add:
    - SYS_ADMIN
    - SYS_MODULE
  environment:
    NFS_VERSION: 3
  security_opt:
    - apparmor=erichough-nfs
And I've loaded the apparmor-profile:
#include <tunables/global>

profile erichough-nfs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  mount fstype=nfs*,
  mount fstype=rpc_pipefs,
}
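For completeness: a profile like this has to be loaded into the kernel on the Docker host before `security_opt: apparmor=erichough-nfs` can reference it by name. The usual command is `sudo apparmor_parser -r <profile-file>`; the path below is an assumption. A non-privileged sketch to check whether AppArmor is enabled at all:

```shell
# Loading the profile (assumed path) would be:
#   sudo apparmor_parser -r /etc/apparmor.d/erichough-nfs
# Check from userspace whether this kernel has AppArmor enabled:
cat /sys/module/apparmor/parameters/enabled 2>/dev/null \
  || echo "AppArmor not available"
```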
What's the output of:
rpcinfo 127.0.11.20
and (if you have netcat installed)
nc -z 127.0.11.20 2049 && echo open || echo closed
I'm just wondering if we're dealing with a networking issue.
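The same reachability check can be extended to every published port with plain bash, in case netcat isn't installed; 127.0.11.20 is taken from the compose file above:

```shell
# Probe each TCP port the container publishes; /dev/tcp is a bash feature.
for p in 111 2049 32765 32767; do
  if timeout 2 bash -c "echo > /dev/tcp/127.0.11.20/$p" 2>/dev/null; then
    echo "$p open"
  else
    echo "$p closed"
  fi
done
```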
These commands look good, having started the above setup with AppArmor on and the 'erichough-nfs' profile loaded:
$ rpcinfo 127.0.11.20
program version netid address service owner
100000 4 tcp6 ::.0.111 portmapper superuser
100000 3 tcp6 ::.0.111 portmapper superuser
100000 4 udp6 ::.0.111 portmapper superuser
100000 3 udp6 ::.0.111 portmapper superuser
100000 4 tcp 0.0.0.0.0.111 portmapper superuser
100000 3 tcp 0.0.0.0.0.111 portmapper superuser
100000 2 tcp 0.0.0.0.0.111 portmapper superuser
100000 4 udp 0.0.0.0.0.111 portmapper superuser
100000 3 udp 0.0.0.0.0.111 portmapper superuser
100000 2 udp 0.0.0.0.0.111 portmapper superuser
100000 4 local /var/run/rpcbind.sock portmapper superuser
100000 3 local /var/run/rpcbind.sock portmapper superuser
100005 3 udp 0.0.0.0.127.255 mountd superuser
100005 3 tcp 0.0.0.0.127.255 mountd superuser
100005 3 udp6 ::.127.255 mountd superuser
100005 3 tcp6 ::.127.255 mountd superuser
100024 1 udp 0.0.0.0.127.253 status superuser
100024 1 tcp 0.0.0.0.127.253 status superuser
100024 1 udp6 ::.127.253 status superuser
100024 1 tcp6 ::.127.253 status superuser
100003 3 tcp 0.0.0.0.8.1 nfs superuser
100227 3 tcp 0.0.0.0.8.1 - superuser
100003 3 udp 0.0.0.0.8.1 nfs superuser
100227 3 udp 0.0.0.0.8.1 - superuser
100003 3 tcp6 ::.8.1 nfs superuser
100227 3 tcp6 ::.8.1 - superuser
100003 3 udp6 ::.8.1 nfs superuser
100227 3 udp6 ::.8.1 - superuser
100021 1 udp 0.0.0.0.143.146 nlockmgr superuser
100021 3 udp 0.0.0.0.143.146 nlockmgr superuser
100021 4 udp 0.0.0.0.143.146 nlockmgr superuser
100021 1 tcp 0.0.0.0.168.45 nlockmgr superuser
100021 3 tcp 0.0.0.0.168.45 nlockmgr superuser
100021 4 tcp 0.0.0.0.168.45 nlockmgr superuser
100021 1 udp6 ::.176.223 nlockmgr superuser
100021 3 udp6 ::.176.223 nlockmgr superuser
100021 4 udp6 ::.176.223 nlockmgr superuser
100021 1 tcp6 ::.154.175 nlockmgr superuser
100021 3 tcp6 ::.154.175 nlockmgr superuser
100021 4 tcp6 ::.154.175 nlockmgr superuser
$ nc -z 127.0.11.20 2049 && echo open || echo closed
open
Sorry I've not been able to try it on a different box yet. The 32-bit box I mentioned doesn't have virtualization capabilities/no docker, and the servers I have access to are all virtualized instances. My laptop has the same setup, Ubuntu 18.04.4 and the same kernel as my workstation, so I doubt there will be much difference, but I'll try today to see if it's an issue specific to my workstation.
> Hmm, that error message is certainly the one that usually accompanies AppArmor interference. But what's also weird is that even though `NFS_LOG_LEVEL` is set to `DEBUG`, we're not seeing the usual debug output. Unless you redacted the logging output?
I didn't redact the log, but I lost the debug flag on the way, sorry about that! :-/ I also set it up and tried it on my laptop in the meanwhile, but the response is exactly the same. Here's the full server output with AppArmor disabled and debugging enabled:
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | SETTING UP ...
nfs-server_1 | ==================================================================
nfs-server_1 | ----> log level set to DEBUG
nfs-server_1 | ----> will use 4 rpc.nfsd server thread(s) (1 thread per CPU)
nfs-server_1 | ----> /etc/exports is bind-mounted
nfs-server_1 | ----> kernel module nfs is missing
nfs-server_1 | ----> attempting to load kernel module nfs
nfs-server_1 | ----> kernel module nfs is loaded
nfs-server_1 | ----> kernel module nfsd is missing
nfs-server_1 | ----> attempting to load kernel module nfsd
nfs-server_1 | ----> kernel module nfsd is loaded
nfs-server_1 | ----> setup complete
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | STARTING SERVICES ...
nfs-server_1 | ==================================================================
nfs-server_1 | ----> mounting rpc_pipefs filesystem onto /var/lib/nfs/rpc_pipefs
nfs-server_1 | mount: mount('rpc_pipefs','/var/lib/nfs/rpc_pipefs','rpc_pipefs',0x00008000,'(null)'):-1: Permission denied
nfs-server_1 | mount: mount('rpc_pipefs','/var/lib/nfs/rpc_pipefs','rpc_pipefs',0x00008001,'(null)'):-1: Permission denied
nfs-server_1 | mount: mounting rpc_pipefs on /var/lib/nfs/rpc_pipefs failed: Permission denied
nfs-server_1 | ---->
nfs-server_1 | ----> ERROR: unable to mount rpc_pipefs filesystem onto /var/lib/nfs/rpc_pipefs
nfs-server_1 | ---->
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | TERMINATING ...
nfs-server_1 | ==================================================================
nfs-server_1 | ----> terminating nfsd
nfs-server_1 | ----> WARNING: unable to terminate nfsd. if it had started already, check Docker host for lingering [nfsd] processes
nfs-server_1 | ----> rpc.statd was not running
nfs-server_1 | ----> rpc.mountd was not running
nfs-server_1 | ----> un-exporting filesystem(s)
nfs-server_1 | ----> rpcbind was not running
nfs-server_1 | ----> no active mount at /proc/fs/nfsd
nfs-server_1 | ----> no active mount at /var/lib/nfs/rpc_pipefs
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | TERMINATED
nfs-server_1 | ==================================================================
and here with AppArmor and the profile loaded:
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | SETTING UP ...
nfs-server_1 | ==================================================================
nfs-server_1 | ----> log level set to DEBUG
nfs-server_1 | ----> will use 4 rpc.nfsd server thread(s) (1 thread per CPU)
nfs-server_1 | ----> /etc/exports is bind-mounted
nfs-server_1 | ----> kernel module nfs is loaded
nfs-server_1 | ----> kernel module nfsd is loaded
nfs-server_1 | ----> setup complete
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | STARTING SERVICES ...
nfs-server_1 | ==================================================================
nfs-server_1 | ----> mounting rpc_pipefs filesystem onto /var/lib/nfs/rpc_pipefs
nfs-server_1 | mount: mount('rpc_pipefs','/var/lib/nfs/rpc_pipefs','rpc_pipefs',0x00008000,'(null)'):0
nfs-server_1 | ----> mounting nfsd filesystem onto /proc/fs/nfsd
nfs-server_1 | mount: mount('nfsd','/proc/fs/nfsd','nfsd',0x00008000,'(null)'):0
nfs-server_1 | ----> starting rpcbind
nfs-server_1 | ----> starting exportfs
nfs-server_1 | exporting *:/export
nfs-server_1 | ----> starting rpc.mountd on port 32767
nfs-server_1 | ----> starting rpc.statd on port 32765 (outgoing from port 32766)
nfs-server_1 | ----> starting rpc.nfsd on port 2049 with 4 server thread(s)
nfs-server_1 | rpc.nfsd: knfsd is currently down
nfs-server_1 | rpc.nfsd: Writing version string to kernel: -2 +3 -4 -4.1 -4.2
nfs-server_1 | rpc.nfsd: Created AF_INET TCP socket.
nfs-server_1 | rpc.nfsd: Created AF_INET UDP socket.
nfs-server_1 | rpc.nfsd: Created AF_INET6 TCP socket.
nfs-server_1 | rpc.nfsd: Created AF_INET6 UDP socket.
nfs-server_1 | ----> all services started normally
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | SERVER STARTUP COMPLETE
nfs-server_1 | ==================================================================
nfs-server_1 | ----> list of enabled NFS protocol versions: 3
nfs-server_1 | ----> list of container exports:
nfs-server_1 | ----> /export *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,fsid=0,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)
nfs-server_1 | ----> list of container ports that should be exposed:
nfs-server_1 | ----> 111 (TCP and UDP)
nfs-server_1 | ----> 2049 (TCP and UDP)
nfs-server_1 | ----> 32765 (TCP and UDP)
nfs-server_1 | ----> 32767 (TCP and UDP)
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | READY AND WAITING FOR NFS CLIENT CONNECTIONS
nfs-server_1 | ==================================================================
nfs-server_1 | rpc.statd: Version 2.3.4 starting
nfs-server_1 | rpc.statd: Flags: No-Daemon Log-STDERR TI-RPC
nfs-server_1 | rpc.statd: Failed to read /var/lib/nfs/state: Address in use
nfs-server_1 | rpc.statd: Initializing NSM state
nfs-server_1 | rpc.statd: Local NSM state number: 3
nfs-server_1 | rpc.statd: Failed to open /proc/sys/fs/nfs/nsm_local_state: Read-only file system
nfs-server_1 | rpc.statd: Running as root. chown /var/lib/nfs to choose different user
nfs-server_1 | rpc.statd: Waiting for client connections
And here's the client-log for v3-only:
$ sudo mount -t nfs -o vers=3 workbench.local:/export /media/mynfsmount/ -v
mount.nfs: timeout set for Tue Apr 7 09:46:12 2020
mount.nfs: trying text-based options 'vers=3,addr=127.0.11.20'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 127.0.11.20 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: portmap query retrying: RPC: Program not registered
mount.nfs: prog 100005, trying vers=3, prot=6
mount.nfs: trying 127.0.11.20 prog 100005 vers 3 prot TCP port 32767
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting workbench.local:/export
So I have now installed Debian Buster on a second computer. The results are the same as far as I can tell. I can't run it without AppArmor either, btw ("mount rpc_pipefs permission denied").
Weirdly, I also had to map the docker image to port 112, as Debian insists on using rpc-statd to start the client, so port 111 is occupied by rpcbind.
With the `insecure` flag in the exports file, I can get it to work with NFSv4 but not NFSv3 (so there's a little difference here).
Here's the nfs-server output on the Debian machine (with the docker-compose.yml you posted above - NFSv3 only, but AppArmor and the profile loaded. Without AppArmor it fails with the "mount rpc_pipefs permission denied" error, like on my other machine):
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | SETTING UP ...
nfs-server_1 | ==================================================================
nfs-server_1 | ----> log level set to DEBUG
nfs-server_1 | ----> will use 4 rpc.nfsd server thread(s) (1 thread per CPU)
nfs-server_1 | ----> /etc/exports is bind-mounted
nfs-server_1 | ----> kernel module nfs is loaded
nfs-server_1 | ----> kernel module nfsd is loaded
nfs-server_1 | ----> setup complete
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | STARTING SERVICES ...
nfs-server_1 | ==================================================================
nfs-server_1 | ----> mounting rpc_pipefs filesystem onto /var/lib/nfs/rpc_pipefs
nfs-server_1 | mount: mount('rpc_pipefs','/var/lib/nfs/rpc_pipefs','rpc_pipefs',0x00008000,'(null)'):0
nfs-server_1 | ----> mounting nfsd filesystem onto /proc/fs/nfsd
nfs-server_1 | mount: mount('nfsd','/proc/fs/nfsd','nfsd',0x00008000,'(null)'):0
nfs-server_1 | ----> starting rpcbind
nfs-server_1 | ----> starting exportfs
nfs-server_1 | exporting *:/export
nfs-server_1 | ----> starting rpc.mountd on port 32767
nfs-server_1 | ----> starting rpc.statd on port 32765 (outgoing from port 32766)
nfs-server_1 | ----> starting rpc.nfsd on port 2049 with 4 server thread(s)
nfs-server_1 | rpc.nfsd: knfsd is currently down
nfs-server_1 | rpc.nfsd: Writing version string to kernel: -2 +3 -4 -4.1 -4.2
nfs-server_1 | rpc.nfsd: Created AF_INET TCP socket.
nfs-server_1 | rpc.nfsd: Created AF_INET UDP socket.
nfs-server_1 | rpc.nfsd: Created AF_INET6 TCP socket.
nfs-server_1 | rpc.nfsd: Created AF_INET6 UDP socket.
nfs-server_1 | ----> all services started normally
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | SERVER STARTUP COMPLETE
nfs-server_1 | ==================================================================
nfs-server_1 | ----> list of enabled NFS protocol versions: 3
nfs-server_1 | ----> list of container exports:
nfs-server_1 | ----> /export *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,fsid=0,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)
nfs-server_1 | ----> list of container ports that should be exposed:
nfs-server_1 | ----> 111 (TCP and UDP)
nfs-server_1 | ----> 2049 (TCP and UDP)
nfs-server_1 | ----> 32765 (TCP and UDP)
nfs-server_1 | ----> 32767 (TCP and UDP)
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | READY AND WAITING FOR NFS CLIENT CONNECTIONS
nfs-server_1 | ==================================================================
nfs-server_1 | rpc.statd: Version 2.3.4 starting
nfs-server_1 | rpc.statd: Flags: No-Daemon Log-STDERR TI-RPC
nfs-server_1 | rpc.statd: Failed to read /var/lib/nfs/state: Address in use
nfs-server_1 | rpc.statd: Initializing NSM state
nfs-server_1 | rpc.statd: Local NSM state number: 3
nfs-server_1 | rpc.statd: Failed to open /proc/sys/fs/nfs/nsm_local_state: Read-only file system
nfs-server_1 | rpc.statd: Running as root. chown /var/lib/nfs to choose different user
nfs-server_1 | rpc.statd: Waiting for client connections
Here's the mount-log:
$ sudo mount -t nfs -o vers=3,port=112 127.0.11.20:/exports /media/cdrom/ -v
mount.nfs: timeout set for Tue Apr 7 13:37:30 2020
mount.nfs: trying text-based options 'vers=3,port=112,addr=127.0.11.20'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: portmap query retrying: RPC: Program not registered
mount.nfs: prog 100003, trying vers=3, prot=17
mount.nfs: portmap query failed: RPC: Program not registered
mount.nfs: requested NFS version or transport protocol is not supported
Let me know if there's anything else I can try.
Networking looks good to me, and I don't think AppArmor is to blame.
Weirdly, I also had to map the container to host port 112, as Debian insists on starting rpc-statd for the client, so host port 111 is already occupied by rpcbind.
I've bumped into that once or twice in the past. Probably not related to our issue.
With the insecure-flag in the exports-file, I can get it to work with NFSv4 but not NFSv3 (So there's a little difference here).
That's our best clue so far, but it would be the first time I've ever seen NFSv4 work but not NFSv3! This might be worth trying to unravel a little more. In the last debug output you posted, I see that the server is still using secure in your /etc/exports:
nfs-server_1 | ----> /export *(rw,sync,wdelay,hide,nocrossmnt,secure,...
Does your copy of /etc/exports contain the insecure flag? I'd be curious to see the output of mount -vvv with NFSv3 and the insecure flag enabled. Clear as mud? :)
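For reference, a minimal /etc/exports line with the insecure option might look like the following (the export path and fsid are the ones from this thread; the rest is illustrative, not a recommended configuration):

```
/export *(rw,sync,insecure,no_subtree_check,fsid=0)
```

The insecure option accepts client requests originating from source ports above 1023, which can matter here because userspace proxies such as docker-proxy rewrite connections to come from high ephemeral ports, and the default secure option rejects those.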
One other thing to check is the filesystem(s) of both your NFS share directory on the host (./data/nfs-export) and your client mountpoint (/media/cdrom/ in your last message). I know that NFS doesn't like to serve from certain filesystems (I wanna say tmpfs and maybe FAT32). Safe to assume that both your Ubuntu and Debian machines are primarily using ext4 or something like btrfs?
Possibly stupid question. Are you able to perform unrelated, non-NFS mounts on this machine? e.g. manually mounting a hard drive, or a FUSE mount, or a bind mount? Just still trying to figure out if this is the OS messing with us, or a problem with NFS.
Thank you for bearing with me!
Mount output with NFS_VERSION: 3 and insecure (-vvv or -v doesn't make a difference):
$ sudo mount -t nfs -o vers=3,port=112 127.0.11.20:/export /media/mynfsshare/ -vvv
mount.nfs: timeout set for Wed Apr 8 11:07:22 2020
mount.nfs: trying text-based options 'vers=3,port=112,addr=127.0.11.20'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: portmap query retrying: RPC: Program not registered
mount.nfs: prog 100003, trying vers=3, prot=17
mount.nfs: portmap query failed: RPC: Program not registered
mount.nfs: requested NFS version or transport protocol is not supported
Filesystem is btrfs on the Ubuntu machine and ext4 on the Debian one; both directories are on the same filesystem. I've mounted sshfs (i.e. FUSE) and a bind mount on the mountpoint without issues :man_shrugging:
The port weirdness, and the fact that NFSv3 with insecure works on Ubuntu but not on the Debian machine, made me think it might be a client issue after all, so I set up the server to listen on the machines' LAN IPs and mounted across machines. The behaviour was exactly the same. With the insecure flag, I'm able to mount NFSv3 and NFSv4 on the Ubuntu server, but only NFSv4 on the Debian server. With the secure flag, ~~I'm not able to mount anything~~
After switching from localhost (127.0.11.20) to the LAN IP, I'm able to mount the Ubuntu server both locally and across machines, with both versions and secure :metal:
With Ubuntu as the client and Debian as the server, only NFSv4 works. Locally on the Debian server nothing works :thinking:
Except for the IP and port 111 vs. 112, they now have an identical configuration:
version: '3'
services:
  nfs-server:
    image: erichough/nfs-server
    ports:
      - 10.0.0.92:2049:2049
      - 10.0.0.92:2049:2049/udp
      - 10.0.0.92:111:111
      - 10.0.0.92:32767:32767
      - 10.0.0.92:32767:32767/udp
      - 10.0.0.92:32765:32765
      - 10.0.0.92:32765:32765/udp
    volumes:
      - ./nfs/exports.txt:/etc/exports:ro
      - ./data/nfs-export:/export
      - /lib/modules:/lib/modules:ro
    cap_add:
      - SYS_ADMIN
      - SYS_MODULE
    environment:
      NFS_LOG_LEVEL: DEBUG
    security_opt:
      - apparmor=erichough-nfs
Here's their respective output:
Ubuntu
$ docker-compose up nfs-server
Starting workbench_nfs-server_1 ... done
Attaching to workbench_nfs-server_1
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | SETTING UP ...
nfs-server_1 | ==================================================================
nfs-server_1 | ----> log level set to DEBUG
nfs-server_1 | ----> will use 4 rpc.nfsd server thread(s) (1 thread per CPU)
nfs-server_1 | ----> /etc/exports is bind-mounted
nfs-server_1 | ----> kernel module nfs is loaded
nfs-server_1 | ----> kernel module nfsd is loaded
nfs-server_1 | ----> setup complete
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | STARTING SERVICES ...
nfs-server_1 | ==================================================================
nfs-server_1 | ----> mounting rpc_pipefs filesystem onto /var/lib/nfs/rpc_pipefs
nfs-server_1 | mount: mount('rpc_pipefs','/var/lib/nfs/rpc_pipefs','rpc_pipefs',0x00008000,'(null)'):0
nfs-server_1 | ----> mounting nfsd filesystem onto /proc/fs/nfsd
nfs-server_1 | mount: mount('nfsd','/proc/fs/nfsd','nfsd',0x00008000,'(null)'):0
nfs-server_1 | ----> starting rpcbind
nfs-server_1 | ----> starting exportfs
nfs-server_1 | exporting *:/export
nfs-server_1 | ----> starting rpc.mountd on port 32767
nfs-server_1 | ----> starting rpc.statd on port 32765 (outgoing from port 32766)
nfs-server_1 | ----> starting rpc.nfsd on port 2049 with 4 server thread(s)
nfs-server_1 | rpc.nfsd: knfsd is currently down
nfs-server_1 | rpc.nfsd: Writing version string to kernel: -2 +3 +4 +4.1 +4.2
nfs-server_1 | rpc.nfsd: Created AF_INET TCP socket.
nfs-server_1 | rpc.nfsd: Created AF_INET UDP socket.
nfs-server_1 | rpc.nfsd: Created AF_INET6 TCP socket.
nfs-server_1 | rpc.nfsd: Created AF_INET6 UDP socket.
nfs-server_1 | ----> all services started normally
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | SERVER STARTUP COMPLETE
nfs-server_1 | ==================================================================
nfs-server_1 | ----> list of enabled NFS protocol versions: 4.2, 4.1, 4, 3
nfs-server_1 | ----> list of container exports:
nfs-server_1 | ----> /export *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,fsid=0,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)
nfs-server_1 | ----> list of container ports that should be exposed:
nfs-server_1 | ----> 111 (TCP and UDP)
nfs-server_1 | ----> 2049 (TCP and UDP)
nfs-server_1 | ----> 32765 (TCP and UDP)
nfs-server_1 | ----> 32767 (TCP and UDP)
nfs-server_1 |
nfs-server_1 | ==================================================================
nfs-server_1 | READY AND WAITING FOR NFS CLIENT CONNECTIONS
nfs-server_1 | ==================================================================
nfs-server_1 | rpc.statd: Version 2.3.4 starting
nfs-server_1 | rpc.statd: Flags: No-Daemon Log-STDERR TI-RPC
nfs-server_1 | rpc.statd: Local NSM state number: 3
nfs-server_1 | rpc.statd: Failed to open /proc/sys/fs/nfs/nsm_local_state: Read-only file system
nfs-server_1 | rpc.statd: Running as root. chown /var/lib/nfs to choose different user
nfs-server_1 | rpc.statd: Waiting for client connections
Debian
$ docker-compose up nfs-server
Starting 8d0ca15711aa_nfstest_nfs-server_1 ... done
Attaching to 8d0ca15711aa_nfstest_nfs-server_1
8d0ca15711aa_nfstest_nfs-server_1 |
8d0ca15711aa_nfstest_nfs-server_1 | ==================================================================
8d0ca15711aa_nfstest_nfs-server_1 | SETTING UP ...
8d0ca15711aa_nfstest_nfs-server_1 | ==================================================================
8d0ca15711aa_nfstest_nfs-server_1 | ----> log level set to DEBUG
8d0ca15711aa_nfstest_nfs-server_1 | ----> will use 4 rpc.nfsd server thread(s) (1 thread per CPU)
8d0ca15711aa_nfstest_nfs-server_1 | ----> /etc/exports is bind-mounted
8d0ca15711aa_nfstest_nfs-server_1 | ----> kernel module nfs is loaded
8d0ca15711aa_nfstest_nfs-server_1 | ----> kernel module nfsd is loaded
8d0ca15711aa_nfstest_nfs-server_1 | ----> setup complete
8d0ca15711aa_nfstest_nfs-server_1 |
8d0ca15711aa_nfstest_nfs-server_1 | ==================================================================
8d0ca15711aa_nfstest_nfs-server_1 | STARTING SERVICES ...
8d0ca15711aa_nfstest_nfs-server_1 | ==================================================================
8d0ca15711aa_nfstest_nfs-server_1 | ----> mounting rpc_pipefs filesystem onto /var/lib/nfs/rpc_pipefs
8d0ca15711aa_nfstest_nfs-server_1 | mount: mount('rpc_pipefs','/var/lib/nfs/rpc_pipefs','rpc_pipefs',0x00008000,'(null)'):0
8d0ca15711aa_nfstest_nfs-server_1 | ----> mounting nfsd filesystem onto /proc/fs/nfsd
8d0ca15711aa_nfstest_nfs-server_1 | mount: mount('nfsd','/proc/fs/nfsd','nfsd',0x00008000,'(null)'):0
8d0ca15711aa_nfstest_nfs-server_1 | ----> starting rpcbind
8d0ca15711aa_nfstest_nfs-server_1 | ----> starting exportfs
8d0ca15711aa_nfstest_nfs-server_1 | exporting *:/export
8d0ca15711aa_nfstest_nfs-server_1 | ----> starting rpc.mountd on port 32767
8d0ca15711aa_nfstest_nfs-server_1 | ----> starting rpc.statd on port 32765 (outgoing from port 32766)
8d0ca15711aa_nfstest_nfs-server_1 | ----> starting rpc.nfsd on port 2049 with 4 server thread(s)
8d0ca15711aa_nfstest_nfs-server_1 | rpc.nfsd: knfsd is currently down
8d0ca15711aa_nfstest_nfs-server_1 | rpc.nfsd: Writing version string to kernel: -2 +3 +4 +4.1 +4.2
8d0ca15711aa_nfstest_nfs-server_1 | rpc.nfsd: Created AF_INET TCP socket.
8d0ca15711aa_nfstest_nfs-server_1 | rpc.nfsd: Created AF_INET UDP socket.
8d0ca15711aa_nfstest_nfs-server_1 | rpc.nfsd: Created AF_INET6 TCP socket.
8d0ca15711aa_nfstest_nfs-server_1 | rpc.nfsd: Created AF_INET6 UDP socket.
8d0ca15711aa_nfstest_nfs-server_1 | ----> all services started normally
8d0ca15711aa_nfstest_nfs-server_1 |
8d0ca15711aa_nfstest_nfs-server_1 | ==================================================================
8d0ca15711aa_nfstest_nfs-server_1 | SERVER STARTUP COMPLETE
8d0ca15711aa_nfstest_nfs-server_1 | ==================================================================
8d0ca15711aa_nfstest_nfs-server_1 | ----> list of enabled NFS protocol versions: 4.2, 4.1, 4, 3
8d0ca15711aa_nfstest_nfs-server_1 | ----> list of container exports:
8d0ca15711aa_nfstest_nfs-server_1 | ----> /export *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,fsid=0,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)
8d0ca15711aa_nfstest_nfs-server_1 | ----> list of container ports that should be exposed:
8d0ca15711aa_nfstest_nfs-server_1 | ----> 111 (TCP and UDP)
8d0ca15711aa_nfstest_nfs-server_1 | ----> 2049 (TCP and UDP)
8d0ca15711aa_nfstest_nfs-server_1 | ----> 32765 (TCP and UDP)
8d0ca15711aa_nfstest_nfs-server_1 | ----> 32767 (TCP and UDP)
8d0ca15711aa_nfstest_nfs-server_1 |
8d0ca15711aa_nfstest_nfs-server_1 | ==================================================================
8d0ca15711aa_nfstest_nfs-server_1 | READY AND WAITING FOR NFS CLIENT CONNECTIONS
8d0ca15711aa_nfstest_nfs-server_1 | ==================================================================
8d0ca15711aa_nfstest_nfs-server_1 | rpc.statd: Version 2.3.4 starting
8d0ca15711aa_nfstest_nfs-server_1 | rpc.statd: Flags: No-Daemon Log-STDERR TI-RPC
8d0ca15711aa_nfstest_nfs-server_1 | rpc.statd: Local NSM state number: 3
8d0ca15711aa_nfstest_nfs-server_1 | rpc.statd: Failed to open /proc/sys/fs/nfs/nsm_local_state: Read-only file system
8d0ca15711aa_nfstest_nfs-server_1 | rpc.statd: Running as root. chown /var/lib/nfs to choose different user
8d0ca15711aa_nfstest_nfs-server_1 | rpc.statd: Waiting for client connections
I have compared the logs line by line and they are identical.
Unfortunately I need a setup where I can run the server on 127.0.11.20. Do you have any further ideas with this new development?
After switching from localhost (127.0.11.20) to the LAN-IP, I'm able to mount the Ubuntu-server both locally and across the machines with both versions and secure
This certainly feels like it's related to networking and the secure/insecure flag. I can't fathom why the secure flag would make any difference ... ?
version: '3'
services:
  nfs-server:
    image: erichough/nfs-server
    ports:
      - 10.0.0.92:2049:2049
      - 10.0.0.92:2049:2049/udp
      - 10.0.0.92:111:111
      - 10.0.0.92:32767:32767
      - 10.0.0.92:32767:32767/udp
      - 10.0.0.92:32765:32765
      - 10.0.0.92:32765:32765/udp
    ...
Out of curiosity, is there any reason why you are being explicit with the IP in these port listings? It shouldn't make a difference, but it might be worth dropping the IP just to see if anything changes.
Double check that your AppArmor profile is the one specified in the docs?
Does anything interesting show up in /var/log/syslog, or anything else in /var/log that looks interesting? Maybe run tail -f /var/log/*.log on both the client and server, then try to mount?
I wonder... There seem to be several issues relating to an inability to mount from the container. Could it be an issue between IPv4 and IPv6? Apologies for being long-winded; I hope there is enough information to start with. My suspicion: my local network is set up for both IPv4 and IPv6, but it seems the server is primarily listening for connections over IPv6. The container, however, only uses IPv4, so the requests never get through.
What's below:
- My client: (Emperor)
- My Server (Magellan)
- Mount attempts from the client
- Docker
- Start the nfs-server container
- Log file: All seems fine
- Docker ps
- rpcinfo (Issue #41?)
- Netstat
- ifconfig: None of the veth0... adapters show an IPv4 address
- Container ifconfig - doesn't show an IPv6 address
My client: (Emperor)
otto@Emperor:~$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
My Server (Magellan)
user@magellan:~$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.4 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.4 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
Mount attempts from the client
otto@Emperor:~$ sudo mount -v -t nfs magellan:/nfsdata/tinycore/presario_tce /mnt
[sudo] password for otto:
mount.nfs: timeout set for Tue Jun 16 11:12:41 2020
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.0.2,clientaddr=192.168.0.100'
mount.nfs: mount(2): No such file or directory
mount.nfs: trying text-based options 'addr=192.168.0.2'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: portmap query retrying: RPC: Program not registered
mount.nfs: prog 100003, trying vers=3, prot=17
mount.nfs: portmap query failed: RPC: Program not registered
mount.nfs: trying text-based options 'addr=fe80::2a92:4aff:fe38:30d5'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: portmap query failed: RPC: Remote system error - Invalid argument
mount.nfs: an incorrect mount option was specified
otto@Emperor:~$ sudo mount -v -t nfs 192.168.0.2:/nfsdata/tinycore/presario_tce /mnt
mount.nfs: timeout set for Tue Jun 16 11:34:48 2020
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.0.2,clientaddr=192.168.0.100'
mount.nfs: mount(2): No such file or directory
mount.nfs: trying text-based options 'addr=192.168.0.2'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: portmap query retrying: RPC: Program not registered
mount.nfs: prog 100003, trying vers=3, prot=17
mount.nfs: portmap query failed: RPC: Program not registered
mount.nfs: requested NFS version or transport protocol is not supported
otto@Emperor:~$ sudo mount -v -t nfs [ipv6:fe80:address:for:magellan:docker0]:/nfsdata/tinycore/presario_tce /mnt
mount.nfs: timeout set for Tue Jun 16 11:58:44 2020
mount.nfs: trying text-based options 'vers=4.2,addr=ipv6:fe80:address:for:magellan:docker0,clientaddr=::'
mount.nfs: mount(2): Invalid argument
mount.nfs: trying text-based options 'vers=4.1,addr=ipv6:fe80:address:for:magellan:docker0,clientaddr=::'
mount.nfs: mount(2): Invalid argument
mount.nfs: trying text-based options 'vers=4.0,addr=ipv6:fe80:address:for:magellan:docker0,clientaddr=::'
mount.nfs: mount(2): Invalid argument
mount.nfs: trying text-based options 'addr=ipv6:fe80:address:for:magellan:docker0'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: portmap query failed: RPC: Remote system error - Invalid argument
mount.nfs: an incorrect mount option was specified
Docker:
user@magellan:~$ docker --version
Docker version 19.03.11, build 42e35e61f3
Start the nfs-server container:
docker run -d --rm --name nfs-server \
  -v /dockerdata/local/nfs:/nfsdata \
  -v /dockerdata/local/nfs/exports:/etc/exports:ro \
  --privileged \
  -p 2049:2049 -p 2049:2049/udp \
  -p 32765:32765 -p 32765:32765/udp \
  -p 32767:32767 -p 32767:32767/udp \
  --security-opt apparmor=erichough-nfs \
  erichough/nfs-server
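One thing worth double-checking here: the docker run above publishes 2049, 32765, and 32767, but not 111, even though the container log lists 111 among the ports that should be exposed. NFSv3 clients ask the portmapper on 111 first, so with this mapping their queries reach the host's own rpcbind (visible in the netstat output further down), which knows nothing about the NFS programs. A hedged variant that also publishes 111 might look like the following; if the host's own rpcbind already occupies 111, it would have to be stopped or remapped, much like the port-112 workaround earlier in this thread:

```shell
docker run -d --rm --name nfs-server \
  -v /dockerdata/local/nfs:/nfsdata \
  -v /dockerdata/local/nfs/exports:/etc/exports:ro \
  --privileged \
  -p 111:111 -p 111:111/udp \
  -p 2049:2049 -p 2049:2049/udp \
  -p 32765:32765 -p 32765:32765/udp \
  -p 32767:32767 -p 32767:32767/udp \
  --security-opt apparmor=erichough-nfs \
  erichough/nfs-server
```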
Log file: All seems fine
==================================================================
SETTING UP ...
==================================================================
----> setup complete
==================================================================
STARTING SERVICES ...
==================================================================
----> starting rpcbind
----> starting exportfs
----> starting rpc.mountd on port 32767
----> starting rpc.statd on port 32765 (outgoing from port 32766)
----> starting rpc.nfsd on port 2049 with 2 server thread(s)
----> all services started normally
==================================================================
SERVER STARTUP COMPLETE
==================================================================
----> list of enabled NFS protocol versions: 4.2, 4.1, 4, 3
----> list of container exports:
----> /nfsdata/tinycore/presario_tce 192.168.0.0/24(no_root_squash,no_subtree_check)
----> /nfsdata/tinycore/armada_tce 192.168.0.0/24(no_root_squash,no_subtree_check)
----> list of container ports that should be exposed:
----> 111 (TCP and UDP)
----> 2049 (TCP and UDP)
----> 32765 (TCP and UDP)
----> 32767 (TCP and UDP)
==================================================================
READY AND WAITING FOR NFS CLIENT CONNECTIONS
==================================================================
Docker ps:
user@magellan:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8aa653e7a71a erichough/nfs-server "/usr/local/bin/entr…" 2 hours ago Up 2 hours 0.0.0.0:2049->2049/tcp, 0.0.0.0:2049->2049/udp, 0.0.0.0:32765->32765/tcp, 0.0.0.0:32765->32765/udp, 0.0.0.0:32767->32767/tcp, 0.0.0.0:32767->32767/udp nfs-server
1d56441f4886 nextcloud:latest "/entrypoint.sh apac…" 6 days ago Up 6 days 80/tcp, 8080/tcp nextcloud
752cd7d5ea8c roundcube/roundcubemail:latest "/docker-entrypoint.…" 6 days ago Up 6 days 80/tcp webmail
28389eb2457c mediawiki:latest "docker-php-entrypoi…" 6 days ago Up 6 days 80/tcp wiki
8cf1caf12b19 phpmyadmin/phpmyadmin:latest "/docker-entrypoint.…" 6 days ago Up 6 days 80/tcp, 8080/tcp, 9000/tcp phpmyadmin
bd7ebcf79507 tvial/docker-mailserver:latest "supervisord -c /etc…" 6 days ago Up 6 days 0.0.0.0:25->25/tcp, 110/tcp, 0.0.0.0:143->143/tcp, 0.0.0.0:587->587/tcp, 465/tcp, 995/tcp, 0.0.0.0:993->993/tcp, 4190/tcp mailserver
4046d4d1b106 mariadb:latest "docker-entrypoint.s…" 6 days ago Up 6 days 0.0.0.0:3306->3306/tcp mariadb
ddb14dc162dc boinc/client "start-boinc.sh" 6 days ago Up 6 days boinc
38dde9d91622 jgiannuzzi/gitolite:latest "/docker-entrypoint.…" 6 days ago Up 6 days 0.0.0.0:2222->22/tcp gitolite
cdd55ab914c8 jwilder/nginx-proxy:alpine "/app/docker-entrypo…" 6 days ago Up 6 days 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nginx-proxy
rpcinfo: (Issue #41?)
user@magellan:~$ rpcinfo -p
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
The output from rpcinfo -p magellan on Emperor is the same.
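Since rpcinfo -p shows nothing beyond the portmapper itself, it may help to probe the NFS and mountd programs directly on their published ports, bypassing the portmapper entirely (hostname and ports are the ones from this setup):

```shell
# Ping the nfs program (100003) v3 directly at port 2049 over TCP, and
# mountd (100005) v3 at port 32767, without consulting rpcbind first.
rpcinfo -n 2049  -t magellan nfs 3
rpcinfo -n 32767 -t magellan mountd 3
```

If these respond while rpcinfo -p lists only portmapper entries, the daemons are reachable but simply not registered with the host's rpcbind, which would fit the missing port-111 mapping in the docker run above.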
Netstat:
user@magellan:~$ sudo netstat -aep
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 0.0.0.0:sunrpc 0.0.0.0:* LISTEN root 24106138 28002/rpcbind
tcp 0 0 localhost:domain 0.0.0.0:* LISTEN root 19300444 19214/dnsmasq
tcp 0 0 192.168.0.2:domain 0.0.0.0:* LISTEN root 19300441 19214/dnsmasq
tcp 0 0 172.17.0.1:domain 0.0.0.0:* LISTEN root 19300438 19214/dnsmasq
tcp 0 0 172.18.0.1:domain 0.0.0.0:* LISTEN root 19300435 19214/dnsmasq
tcp 0 0 0.0.0.0:ssh 0.0.0.0:* LISTEN root 22321 1480/sshd
tcp 0 0 0.0.0.0:31416 0.0.0.0:* LISTEN root 35434 3724/boinc
tcp 0 0 localhost:3551 0.0.0.0:* LISTEN root 24983 1542/apcupsd
tcp 0 220 192.168.0.2:ssh Emperor.schreibke:35656 ESTABLISHED root 27396797 21415/sshd: user [p
tcp 1 0 192.168.0.2:38806 boincai02.cern.ch:https CLOSE_WAIT root 27357029 3724/boinc
tcp 32 0 192.168.0.2:39440 milkyway.cs.rpi.e:https CLOSE_WAIT root 27357183 3724/boinc
tcp 1 0 192.168.0.2:37172 milkyway.cs.rpi.ed:http CLOSE_WAIT root 27357179 3724/boinc
tcp6 0 0 [::]:mysql [::]:* LISTEN root 33824 3663/docker-proxy
tcp6 0 0 [::]:submission [::]:* LISTEN root 33790 3702/docker-proxy
tcp6 0 0 [::]:2222 [::]:* LISTEN root 33817 3650/docker-proxy
tcp6 0 0 [::]:sunrpc [::]:* LISTEN root 24106141 28002/rpcbind
tcp6 0 0 [::]:imap2 [::]:* LISTEN root 34874 3734/docker-proxy
tcp6 0 0 [::]:http [::]:* LISTEN root 33799 3635/docker-proxy
tcp6 0 0 localhost6.local:domain [::]:* LISTEN root 19300477 19214/dnsmasq
tcp6 0 0 fe80::2a92:4aff::domain [::]:* LISTEN root 19300474 19214/dnsmasq
tcp6 0 0 fe80::42:20ff:fe:domain [::]:* LISTEN root 19300471 19214/dnsmasq
tcp6 0 0 fe80::1445:1bff::domain [::]:* LISTEN root 19300468 19214/dnsmasq
tcp6 0 0 fe80::c449:bdff::domain [::]:* LISTEN root 19300465 19214/dnsmasq
tcp6 0 0 fe80::cce6:9dff::domain [::]:* LISTEN root 19300462 19214/dnsmasq
tcp6 0 0 fe80::7824:e7ff::domain [::]:* LISTEN root 19300459 19214/dnsmasq
tcp6 0 0 fe80::1c57:c2ff::domain [::]:* LISTEN root 19300456 19214/dnsmasq
tcp6 0 0 fe80::1a:c1ff:fe:domain [::]:* LISTEN root 19300453 19214/dnsmasq
tcp6 0 0 fe80::8464:a1ff::domain [::]:* LISTEN root 19300450 19214/dnsmasq
tcp6 0 0 fe80::c091:14ff::domain [::]:* LISTEN root 19300447 19214/dnsmasq
tcp6 0 0 [::]:ssh [::]:* LISTEN root 22330 1480/sshd
tcp6 0 0 [::]:smtp [::]:* LISTEN root 34903 3757/docker-proxy
tcp6 0 0 [::]:https [::]:* LISTEN root 32762 3605/docker-proxy
tcp6 0 0 [::]:32765 [::]:* LISTEN root 27520058 30100/docker-proxy
tcp6 0 0 [::]:32767 [::]:* LISTEN root 27517470 30074/docker-proxy
tcp6 0 0 [::]:nfs [::]:* LISTEN root 27520125 30124/docker-proxy
tcp6 0 0 [::]:imaps [::]:* LISTEN root 33765 3684/docker-proxy
udp 0 0 localhost:domain 0.0.0.0:* root 19300443 19214/dnsmasq
udp 0 0 192.168.0.2:domain 0.0.0.0:* root 19300440 19214/dnsmasq
udp 0 0 172.17.0.1:domain 0.0.0.0:* root 19300437 19214/dnsmasq
udp 0 0 172.18.0.1:domain 0.0.0.0:* root 19300434 19214/dnsmasq
udp 0 0 0.0.0.0:bootps 0.0.0.0:* root 19300431 19214/dnsmasq
udp 0 0 localhost:tftp 0.0.0.0:* root 19300445 19214/dnsmasq
udp 0 0 192.168.0.2:tftp 0.0.0.0:* root 19300442 19214/dnsmasq
udp 0 0 172.17.0.1:tftp 0.0.0.0:* root 19300439 19214/dnsmasq
udp 0 0 172.18.0.1:tftp 0.0.0.0:* root 19300436 19214/dnsmasq
udp 0 0 0.0.0.0:sunrpc 0.0.0.0:* root 24106136 28002/rpcbind
udp 0 0 0.0.0.0:618 0.0.0.0:* root 24106137 28002/rpcbind
udp6 0 0 [::]:32765 [::]:* root 27520099 30112/docker-proxy
udp6 0 0 [::]:32767 [::]:* root 27520034 30087/docker-proxy
udp6 0 0 localhost6.local:domain [::]:* root 19300476 19214/dnsmasq
udp6 0 0 fe80::2a92:4aff::domain [::]:* root 19300473 19214/dnsmasq
udp6 0 0 fe80::42:20ff:fe:domain [::]:* root 19300470 19214/dnsmasq
udp6 0 0 fe80::1445:1bff::domain [::]:* root 19300467 19214/dnsmasq
udp6 0 0 fe80::c449:bdff::domain [::]:* root 19300464 19214/dnsmasq
udp6 0 0 fe80::cce6:9dff::domain [::]:* root 19300461 19214/dnsmasq
udp6 0 0 fe80::7824:e7ff::domain [::]:* root 19300458 19214/dnsmasq
udp6 0 0 fe80::1c57:c2ff::domain [::]:* root 19300455 19214/dnsmasq
udp6 0 0 fe80::1a:c1ff:fe:domain [::]:* root 19300452 19214/dnsmasq
udp6 0 0 fe80::8464:a1ff::domain [::]:* root 19300449 19214/dnsmasq
udp6 0 0 fe80::c091:14ff::domain [::]:* root 19300446 19214/dnsmasq
udp6 0 0 localhost6.localdo:tftp [::]:* root 19300478 19214/dnsmasq
udp6 0 0 fe80::2a92:4aff:fe:tftp [::]:* root 19300475 19214/dnsmasq
udp6 0 0 fe80::42:20ff:fe71:tftp [::]:* root 19300472 19214/dnsmasq
udp6 0 0 fe80::1445:1bff:fe:tftp [::]:* root 19300469 19214/dnsmasq
udp6 0 0 fe80::c449:bdff:fe:tftp [::]:* root 19300466 19214/dnsmasq
udp6 0 0 fe80::cce6:9dff:fe:tftp [::]:* root 19300463 19214/dnsmasq
udp6 0 0 fe80::7824:e7ff:fe:tftp [::]:* root 19300460 19214/dnsmasq
udp6 0 0 fe80::1c57:c2ff:fe:tftp [::]:* root 19300457 19214/dnsmasq
udp6 0 0 fe80::1a:c1ff:fed2:tftp [::]:* root 19300454 19214/dnsmasq
udp6 0 0 fe80::8464:a1ff:fe:tftp [::]:* root 19300451 19214/dnsmasq
udp6 0 0 fe80::c091:14ff:fe:tftp [::]:* root 19300448 19214/dnsmasq
udp6 0 0 [::]:sunrpc [::]:* root 24106139 28002/rpcbind
udp6 0 0 fe80::2a9:dhcpv6-client [::]:* systemd-network 19449 1188/systemd-networ
udp6 0 0 [::]:618 [::]:* root 24106140 28002/rpcbind
udp6 0 0 [::]:nfs [::]:* root 27520170 30136/docker-proxy
raw6 0 0 [::]:ipv6-icmp [::]:* 7 systemd-network 20908 1188/systemd-networ
<snip>
unix 2 [ ACC ] STREAM LISTENING 27517518 30145/containerd-sh @/containerd-shim/moby/8aa653e7a71ae42d7a74d921294eaa9676cc703c635ccd6dca62a761d1151f4d/shim.sock@
<snip>
unix 3 [ ] STREAM CONNECTED 27517523 30145/containerd-sh @/containerd-shim/moby/8aa653e7a71ae42d7a74d921294eaa9676cc703c635ccd6dca62a761d1151f4d/shim.sock@
<snip>
ifconfig: None of the veth0... adapters show an IPv4 address
user@magellan:~$ ifconfig
br-be80c0ce35b7: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
inet6 fe80::42:20ff:fe71:844b prefixlen 64 scopeid 0x20<link>
ether 02:42:20:71:84:4b txqueuelen 0 (Ethernet)
RX packets 589881 bytes 98718643 (98.7 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 530470 bytes 204000084 (204.0 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:3dff:fe45:5198 prefixlen 64 scopeid 0x20<link>
ether 02:42:3d:45:51:98 txqueuelen 0 (Ethernet)
RX packets 257 bytes 19668 (19.6 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 447 bytes 49350 (49.3 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.0.2 netmask 255.255.255.0 broadcast 192.168.0.255
inet6 fe80::2a92:4aff:fe38:30d5 prefixlen 64 scopeid 0x20<link>
ether 28:92:4a:38:30:d5 txqueuelen 1000 (Ethernet)
RX packets 19548693 bytes 26724319283 (26.7 GB)
RX errors 0 dropped 262561 overruns 0 frame 0
TX packets 3144982 bytes 416262380 (416.2 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 18
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 14675162 bytes 880657103 (880.6 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 14675162 bytes 880657103 (880.6 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth0c2a01f: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::28a6:bbff:fe6b:5f21 prefixlen 64 scopeid 0x20<link>
ether 2a:a6:bb:6b:5f:21 txqueuelen 0 (Ethernet)
RX packets 127 bytes 11042 (11.0 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 211 bytes 23806 (23.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth0dc4bac: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::c449:bdff:fe83:325f prefixlen 64 scopeid 0x20<link>
ether c6:49:bd:83:32:5f txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 620 bytes 41292 (41.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth2b5013c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::1445:1bff:fe94:8537 prefixlen 64 scopeid 0x20<link>
ether 16:45:1b:94:85:37 txqueuelen 0 (Ethernet)
RX packets 136828 bytes 44690749 (44.6 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 83372  bytes 37870087 (37.8 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth2b5d33c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::8464:a1ff:fe9e:dddd  prefixlen 64  scopeid 0x20<link>
        ether 86:64:a1:9e:dd:dd  txqueuelen 0  (Ethernet)
        RX packets 84547  bytes 37226374 (37.2 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 83402  bytes 88339653 (88.3 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth2fbbbff: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::1c57:c2ff:fe68:2e80  prefixlen 64  scopeid 0x20<link>
        ether 1e:57:c2:68:2e:80  txqueuelen 0  (Ethernet)
        RX packets 9656  bytes 2566726 (2.5 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10449  bytes 52443671 (52.4 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth44b2808: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::cce6:9dff:fed2:b361  prefixlen 64  scopeid 0x20<link>
        ether ce:e6:9d:d2:b3:61  txqueuelen 0  (Ethernet)
        RX packets 86834  bytes 126882723 (126.8 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 103527  bytes 22828924 (22.8 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

veth59d8e43: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::1a:c1ff:fed2:9727  prefixlen 64  scopeid 0x20<link>
        ether 02:1a:c1:d2:97:27  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 616  bytes 40960 (40.9 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vethd840166: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::c091:14ff:fe8f:b6de  prefixlen 64  scopeid 0x20<link>
        ether c2:91:14:8f:b6:de  txqueuelen 0  (Ethernet)
        RX packets 73500  bytes 20230153 (20.2 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 70028  bytes 40706865 (40.7 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vethe90fd11: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::7824:e7ff:fe83:77dd  prefixlen 64  scopeid 0x20<link>
        ether 7a:24:e7:83:77:dd  txqueuelen 0  (Ethernet)
        RX packets 496835  bytes 82946368 (82.9 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 481020  bytes 169576090 (169.5 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Container ifconfig (note that it doesn't show an IPv6 address):
user@magellan:~$ docker exec -ti nfs-server 'ifconfig'
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:211 errors:0 dropped:0 overruns:0 frame:0
          TX packets:127 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:23806 (23.2 KiB)  TX bytes:11042 (10.7 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:140 (140.0 B)  TX bytes:140 (140.0 B)
I had a similar problem. It turned out that on my OpenStack instance, the new Ubuntu 20.04 image came with the 5.4.0-1026-kvm kernel instead of a -generic kernel. The problem was that many modules were not included, and in particular modprobe nfsd failed. After switching to an image with a generic kernel, everything worked.
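Before starting the container, it can help to check on the Docker host whether the running kernel can provide nfsd at all. The following is a rough sketch assuming the usual Linux layout of modules under /lib/modules/&lt;release&gt;; the exact paths may differ on your distro:

```shell
# Sketch: check whether the running kernel can serve NFS.
# "-kvm" flavored kernels ship far fewer modules than "-generic" ones.
kver="$(uname -r)"
echo "kernel release: $kver"

if grep -qw nfsd /proc/filesystems 2>/dev/null; then
    echo "nfsd is already available in this kernel"
elif [ -d "/lib/modules/$kver/kernel/fs/nfsd" ]; then
    echo "nfsd module found on disk; load it as root with: modprobe nfsd"
else
    echo "no nfsd module for this kernel; consider switching to a -generic kernel"
fi
```

If the last branch is what you see, no amount of capability tuning inside the container will help, since the module simply isn't there to load.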
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-server-volume
spec:
  storageClassName: local-storage
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: "/Users/brandonros/Desktop/nfs-server-volume"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nfs-server
spec:
  replicas: 1
  serviceName: nfs-server
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: registry.hub.docker.com/erichough/nfs-server:2.2.1
          securityContext:
            privileged: true
            capabilities:
              add:
                - SYS_ADMIN
                - SYS_MODULE
          env:
            - name: NFS_EXPORT_0
              value: "/mnt *"
          ports:
            - containerPort: 2049
          volumeMounts:
            - mountPath: /mnt
              name: nfs-server-volume
            - mountPath: /lib/modules
              name: lib-modules
      volumes:
        - name: lib-modules
          hostPath:
            path: /lib/modules
  volumeClaimTemplates:
    - metadata:
        name: nfs-server-volume
      spec:
        storageClassName: local-storage
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 2Gi
$ kubectl logs pods/nfs-server-0 -n redacted
==================================================================
SETTING UP ...
==================================================================
----> building /etc/exports from environment variables
----> collected 1 valid export(s) from NFS_EXPORT_* environment variables
----> kernel module nfs is missing
----> attempting to load kernel module nfs
modprobe: can't load module nfs_ssc (kernel/fs/nfs_common/nfs_ssc.ko): kernel does not support requested operation
---->
----> ERROR: unable to dynamically load kernel module nfs. try modprobe nfs on the Docker host
---->
I had the same problem. The only way I could solve it, so that I could use the secure option in the exports, was to change the Docker network mode from bridge to host. I am using docker-compose, so I just added network_mode: host to my compose file.
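For reference, a minimal docker-compose sketch along those lines; the export path, bind mount, and export options here are illustrative assumptions, not taken from the thread:

```yaml
version: "3"
services:
  nfs-server:
    image: erichough/nfs-server:2.2.1   # same image as in the manifest above
    network_mode: host                  # bypass the bridge/NAT so clients connect directly
    cap_add:
      - SYS_ADMIN
    environment:
      # hypothetical export; 'secure' requires requests from privileged ports (< 1024),
      # which bridge-mode NAT can otherwise rewrite
      NFS_EXPORT_0: "/mnt *(rw,secure)"
    volumes:
      - ./share:/mnt
```

Note that with network_mode: host no ports mapping is needed, since the container shares the host's network stack.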