
nfs-ganesha-gluster on ubuntu 22.04

Open urbaman opened this issue 1 year ago • 9 comments

Hi,

I'm trying to export a Gluster volume via nfs-ganesha, but clients don't seem to be able to actually use the exported volume.

When I try to mount, the command succeeds, but I get "Unknown error 524" as soon as I run "ls" on the mounted path.

mount command:

sudo mount -t nfs gluster1.urbaman.it:/HDD5T /mnt
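
(For reference, the NFS protocol version can be pinned with the standard vers= mount option, to check whether the error is specific to NFSv4; these are only illustrative variants of the command above:)

sudo mount -t nfs -o vers=3 gluster1.urbaman.it:/HDD5T /mnt
sudo mount -t nfs -o vers=4.1 gluster1.urbaman.it:/HDD5T /mnt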

logs on the client:

May  3 15:50:08 nfs3 kernel: [10724.959413] FS-Cache: Loaded
May  3 15:50:09 nfs3 kernel: [10725.010123] FS-Cache: Netfs 'nfs' registered for caching
May  3 15:50:09 nfs3 kernel: [10725.232646] NFS: Registering the id_resolver key type
May  3 15:50:09 nfs3 kernel: [10725.232670] Key type id_resolver registered
May  3 15:50:09 nfs3 kernel: [10725.232672] Key type id_legacy registered
May  3 15:50:23 nfs3 systemd[1]: mnt.mount: Deactivated successfully.
May  3 15:51:46 nfs3 systemd[1]: mnt.mount: Deactivated successfully.

no useful logs on the server in the same timeframe, on the server I see the export:

ubuntu@nfs1:~$ showmount -e localhost
Export list for localhost:
/HDD5T (everyone)

thank you,

urbaman, May 03 '23 14:05

524 is ENOTSUPP (rather than the usual ENOTSUP). It appears to be a kernel-specific error code; NFSERR_NOTSUPP and NFS4ERR_NOTSUPP are also converted to ENOTSUPP on the client.
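
(For reference, one quick way to confirm that 524 maps to ENOTSUPP is to grep the kernel's internal errno header; the exact path depends on the installed headers package, so treat it as an assumption:)

grep -n 'define ENOTSUPP' /usr/src/linux-headers-*/include/linux/errno.h

which should show ENOTSUPP defined as 524.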

Can you provide a tcpdump (binary format) of the ls attempt?

Your client is trying something that Ganesha doesn't support...

ffilz, May 03 '23 18:05

Hi,

The tcpdump shows:

22:20:10.267106 IP nfs1.urbaman.it.nfs > mngmnt-desktop.urbaman.it.738: Flags [P.], seq 4333:4433, ack 5541, win 454, options [nop,nop,TS val 3703214361 ecr 1717648367], length 100: NFS reply xid 883601419 reply ok 96 getattr ERROR: Operation not supported

I do not understand why; "ls" should be the simplest command to work, and it's the one referenced in all the guides as a test for the export. (I'm mounting with nfs-common on Ubuntu, to /mnt.)

Thank you.

urbaman, May 03 '23 20:05

Other info.

I can see the mounted export:

df -h
Filesystem      Size  Used  Avail Use% Mounted on
tmpfs           1,2G  1,6M    1,2G   1% /run
/dev/sda3        31G   19G     11G  63% /
tmpfs           5,9G     0    5,9G   0% /dev/shm
tmpfs           5,0M     0    5,0M   0% /run/lock
/dev/sda2       512M  6,1M    506M   2% /boot/efi
tmpfs           1,2G  2,4M    1,2G   1% /run/user/1000
nfs1:/HDD5T     8,9T   91G    8,8T   1% /mnt

But I cannot run any command against it (cd, ls, ...); I always get the 524 error.

urbaman, May 03 '23 20:05

Could you do a binary tcpdump capture and share it, such as:

tcpdump -s0 -w /tmp/trace.tpc

I'd like to see, in all its glory, the GETATTR that's resulting in NFS4ERR_NOTSUPP.

The client is presumably requesting an attribute Ganesha doesn't support, but without knowing which attributes it's requesting, I can't tell you what should be happening.
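
(A possible way to see exactly which attributes the client asks for is to decode the capture with tshark; the display-filter field name follows Wireshark's NFS dissector and is an assumption worth double-checking, with OP_GETATTR being opcode 9 in NFSv4. Adjust the file name to the actual capture:)

tshark -r trace.pcap -Y 'nfs.opcode == 9' -V

and then look at the attribute mask in the GETATTR request and reply.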

ffilz, May 03 '23 20:05

Hi, here's the file.

I limited the capture to the traffic between client and server. It contains the traffic for the mount, the "ls", and the umount commands. trace.zip

urbaman, May 03 '23 21:05

Hmm, it's happening on an ACCESS call. Something odd is going on...

Can you run with the following LOG config:

LOG
{
	COMPONENTS
	{
		FSAL=FULL_DEBUG;
	}
}
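
This would go in the Ganesha config (typically /etc/ganesha/ganesha.conf) and needs the daemon to be restarted to take effect; the unit name below is an assumption based on the Ubuntu packaging:

sudo systemctl restart nfs-ganesha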

But before that, a quick question: are you using ACLs on your Gluster filesystem, or have you set any ACL options in your Ganesha config? That is the most likely source of the error.
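
(One way to check for ACL-related settings on the Gluster side is to list the volume's reconfigured options and filter for acl; the volume name is the one from this issue:)

gluster volume info HDD5T | grep -i acl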

ffilz, May 03 '23 23:05

Hi,

It's a regular fresh install of both Gluster and Ganesha, so no ACLs are active as far as I know.
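
(One way to double-check that no POSIX ACLs are set on the export root would be to run getfacl against a temporary FUSE mount of the volume; the mount point is an assumption, and this requires the glusterfs client and acl packages:)

sudo mkdir -p /mnt/glustertest
sudo mount -t glusterfs gluster1.urbaman.it:/HDD5T /mnt/glustertest
getfacl /mnt/glustertest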

Here's the Ganesha log, starting with the restart (done to pick up the new log level):

04/05/2023 09:22:03 : epoch 6452cc0f : nfs1 : ganesha.nfsd-220056[Admin] destroy_fsals :FSAL :EVENT :Unloading FSAL GLUSTER
04/05/2023 09:22:03 : epoch 6452cc0f : nfs1 : ganesha.nfsd-220056[Admin] destroy_fsals :FSAL :EVENT :FSAL GLUSTER unloaded
04/05/2023 09:22:03 : epoch 6452cc0f : nfs1 : ganesha.nfsd-220056[Admin] do_shutdown :MAIN :EVENT :FSAL system destroyed.
04/05/2023 09:22:03 : epoch 6452cc0f : nfs1 : ganesha.nfsd-220056[main] nfs_start :MAIN :EVENT :NFS EXIT: regular exit
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569928[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 3.5
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] log_conf_commit :CONFIG :EVENT :Switching to new component log levels
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :FileSystem info for FSAL PSEUDO {
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  maxfilesize  = 200
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  maxlink  = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  maxnamelen  = 255
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  maxpathlen  = 4096
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  no_trunc  = 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  chown_restricted  = 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  case_insensitive  = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  case_preserving  = 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  link_support  = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  symlink_support  = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  lock_support  = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  lock_support_async_block  = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  named_attr  = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  unique_handles  = 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  acl_support  = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  cansettime  = 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  homogenous  = 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  supported_attrs  = 11DFCE
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  maxread  = 67108864
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  maxwrite  = 67108864
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  umask  = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  auth_exportpath_xdev  = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  delegations = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  pnfs_mds = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  pnfs_ds = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  fsal_trace = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  fsal_grace = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  expire_time_parent = -1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :}
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] init_config :FSAL :DEBUG :FSAL INIT: Supported attributes mask = 0x11dfce
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] glusterfs_init :FSAL :DEBUG :FSAL Gluster initialized
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] load_fsal :FSAL :F_DBG :FSAL GLUSTER refcount 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :FileSystem info for FSAL GLUSTER {
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  maxfilesize  = 7FFFFFFFFFFFFFFF
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  maxlink  = 8
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  maxnamelen  = 1024
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  maxpathlen  = 1024
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  no_trunc  = 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  chown_restricted  = 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  case_insensitive  = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  case_preserving  = 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  link_support  = 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  symlink_support  = 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  lock_support  = 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  lock_support_async_block  = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  named_attr  = 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  unique_handles  = 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  acl_support  = 3
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  cansettime  = 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  homogenous  = 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  supported_attrs  = 211DFEE
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  maxread  = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  maxwrite  = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  umask  = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  auth_exportpath_xdev  = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  delegations = 3
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  pnfs_mds = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  pnfs_ds = 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  fsal_trace = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  fsal_grace = 0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :  expire_time_parent = -1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] display_fsinfo :FSAL :DEBUG :}
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] glusterfs_create_export :FSAL :DEBUG :In args: export path = /HDD5T
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] glusterfs_create_export :FSAL :EVENT :Volume HDD5T exported at : '/'
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] mdcache_fsal_create_export :FSAL :F_DBG :FSAL MDCACHE refcount 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] lookup_fsal :FSAL :F_DBG :FSAL PSEUDO refcount 1
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] pseudofs_create_export :FSAL :DEBUG :Created exp 0x55c4ade3dd40 - /
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] mdcache_fsal_create_export :FSAL :F_DBG :FSAL MDCACHE refcount 2
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:7): Unknown parameter (Transports)
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] lookup_path :FSAL :F_DBG :In args: path = /HDD5T
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] makedir :FSAL :DEBUG :create HDD5T
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] makedir :FSAL :F_DBG :/ numlinks 3
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] getattrs :FSAL :F_DBG :hdl=0x55c4ade68d10, name=/ numlinks 3
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] fsal_create :FSAL :F_DBG :Returning obj=0x55c4ade788a8 status=No error for HDD5T FSAL=MDCACHE
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] fsal_save_ganesha_credentials :FSAL :INFO :Ganesha uid=0 gid=0 ngroups=0
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
04/05/2023 09:22:03 : epoch 64535d1b : nfs1 : ganesha.nfsd-569930[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)

Here's ganesha-gfapi.log (it seems to log in a different time zone despite being on the very same machine; the machine's time is 09:22, see also Gluster's log below):

[2023-05-04 07:22:03.538871 +0000] I [timer.c:85:gf_timer_call_cancel] (-->/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_log_fini+0x15) [0x7f0231c959b5] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_log_disable_suppression_before_exit+0x37) [0x7f0231c95987] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_timer_call_cancel+0xed) [0x7f0231ca35ed] ) 0-timer: ctx cleanup started
[2023-05-04 07:22:03.698348 +0000] I [io-stats.c:3701:ios_sample_buf_size_configure] 0-HDD5T: Configure ios_sample_buf  size is 1024 because ios_sample_interval is 0
[2023-05-04 07:22:03.705064 +0000] I [MSGID: 104045] [glfs-primary.c:78:notify] 0-gfapi: New graph coming up [{graph-uuid=6e667331-2d35-3639-3933-302d32303233}, {id=0}]
[2023-05-04 07:22:03.705154 +0000] I [MSGID: 114020] [client.c:2336:notify] 0-HDD5T-client-0: parent translators are ready, attempting connect on transport []
[2023-05-04 07:22:03.706352 +0000] I [MSGID: 114020] [client.c:2336:notify] 0-HDD5T-client-1: parent translators are ready, attempting connect on transport []
[2023-05-04 07:22:03.707005 +0000] I [rpc-clnt.c:1969:rpc_clnt_reconfig] 0-HDD5T-client-0: changing port to 51703 (from 0)
[2023-05-04 07:22:03.707092 +0000] I [socket.c:834:__socket_shutdown] 0-HDD5T-client-0: intentional socket shutdown(9)
[2023-05-04 07:22:03.707296 +0000] I [MSGID: 114020] [client.c:2336:notify] 0-HDD5T-client-2: parent translators are ready, attempting connect on transport []
[2023-05-04 07:22:03.708948 +0000] I [MSGID: 114057] [client-handshake.c:1118:select_server_supported_programs] 0-HDD5T-client-0: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2023-05-04 07:22:03.709042 +0000] I [MSGID: 114020] [client.c:2336:notify] 0-HDD5T-client-3: parent translators are ready, attempting connect on transport []
[2023-05-04 07:22:03.709126 +0000] I [rpc-clnt.c:1969:rpc_clnt_reconfig] 0-HDD5T-client-1: changing port to 59566 (from 0)
[2023-05-04 07:22:03.709200 +0000] I [socket.c:834:__socket_shutdown] 0-HDD5T-client-1: intentional socket shutdown(10)
[2023-05-04 07:22:03.710040 +0000] I [MSGID: 114020] [client.c:2336:notify] 0-HDD5T-client-4: parent translators are ready, attempting connect on transport []
[2023-05-04 07:22:03.711216 +0000] I [MSGID: 114057] [client-handshake.c:1118:select_server_supported_programs] 0-HDD5T-client-1: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2023-05-04 07:22:03.711869 +0000] I [rpc-clnt.c:1969:rpc_clnt_reconfig] 0-HDD5T-client-2: changing port to 60242 (from 0)
[2023-05-04 07:22:03.711936 +0000] I [socket.c:834:__socket_shutdown] 0-HDD5T-client-2: intentional socket shutdown(9)
[2023-05-04 07:22:03.711982 +0000] I [rpc-clnt.c:1969:rpc_clnt_reconfig] 0-HDD5T-client-3: changing port to 54243 (from 0)
[2023-05-04 07:22:03.712019 +0000] I [socket.c:834:__socket_shutdown] 0-HDD5T-client-3: intentional socket shutdown(12)
[2023-05-04 07:22:03.712225 +0000] I [MSGID: 114020] [client.c:2336:notify] 0-HDD5T-client-5: parent translators are ready, attempting connect on transport []
Final graph:
+------------------------------------------------------------------------------+
  1: volume HDD5T-client-0
  2:     type protocol/client
  3:     option opversion 100000
  4:     option clnt-lk-version 1
  5:     option volfile-checksum 0
  6:     option volfile-key HDD5T
  7:     option client-version 10.1
  8:     option process-name gfapi.ganesha.nfsd
  9:     option process-uuid CTX_ID:434897c9-a6af-49c2-bb4a-6d30a6857528-GRAPH_ID:0-PID:569930-HOST:nfs1-PC_NAME:HDD5T-client-0-RECON_NO:-0
 10:     option fops-version 1298437
 11:     option ping-timeout 42
 12:     option remote-host gluster1.urbaman.it
 13:     option remote-subvolume /HDD5T11
 14:     option transport-type socket
 15:     option transport.address-family inet
 16:     option username b3ebc2cb-f40f-458a-b8b7-7dae1042a52d
 17:     option password b5d8c727-28eb-4c0d-a83f-c2c0407e8182
 18:     option transport.socket.ssl-enabled off
 19:     option transport.tcp-user-timeout 0
 20:     option transport.socket.keepalive-time 20
 21:     option transport.socket.keepalive-interval 2
 22:     option transport.socket.keepalive-count 9
 23:     option strict-locks off
 24:     option send-gids true
 25: end-volume
 26:
 27: volume HDD5T-client-1
 28:     type protocol/client
 29:     option opversion 100000
 30:     option clnt-lk-version 1
 31:     option volfile-checksum 0
 32:     option volfile-key HDD5T
 33:     option client-version 10.1
 34:     option process-name gfapi.ganesha.nfsd
 35:     option process-uuid CTX_ID:434897c9-a6af-49c2-bb4a-6d30a6857528-GRAPH_ID:0-PID:569930-HOST:nfs1-PC_NAME:HDD5T-client-1-RECON_NO:-0
 36:     option fops-version 1298437
 37:     option ping-timeout 42
 38:     option remote-host gluster1.urbaman.it
 39:     option remote-subvolume /HDD5T12
 40:     option transport-type socket
 41:     option transport.address-family inet
 42:     option username b3ebc2cb-f40f-458a-b8b7-7dae1042a52d
 43:     option password b5d8c727-28eb-4c0d-a83f-c2c0407e8182
 44:     option transport.socket.ssl-enabled off
 45:     option transport.tcp-user-timeout 0
 46:     option transport.socket.keepalive-time 20
 47:     option transport.socket.keepalive-interval 2
 48:     option transport.socket.keepalive-count 9
 49:     option strict-locks off
 50:     option send-gids true
 51: end-volume
 52:
 53: volume HDD5T-client-2
 54:     type protocol/client
 55:     option ping-timeout 42
 56:     option remote-host gluster2.urbaman.it
 57:     option remote-subvolume /HDD5T21
 58:     option transport-type socket
 59:     option transport.address-family inet
 60:     option username b3ebc2cb-f40f-458a-b8b7-7dae1042a52d
 61:     option password b5d8c727-28eb-4c0d-a83f-c2c0407e8182
 62:     option transport.socket.ssl-enabled off
 63:     option transport.tcp-user-timeout 0
 64:     option transport.socket.keepalive-time 20
 65:     option transport.socket.keepalive-interval 2
 66:     option transport.socket.keepalive-count 9
 67:     option strict-locks off
 68:     option send-gids true
 69: end-volume
 70:
 71: volume HDD5T-replicate-0
 72:     type cluster/replicate
 73:     option afr-pending-xattr HDD5T-client-0,HDD5T-client-1,HDD5T-client-2
 74:     option volume-id 7a19b3b6-2312-4694-ad71-cb7a2c444657
 75:     option granular-entry-heal on
 76:     option use-compound-fops off
 77:     option use-anonymous-inode yes
 78:     subvolumes HDD5T-client-0 HDD5T-client-1 HDD5T-client-2
 79: end-volume
 80:
 81: volume HDD5T-client-3
 82:     type protocol/client
 83:     option ping-timeout 42
 84:     option remote-host gluster2.urbaman.it
 85:     option remote-subvolume /HDD5T22
 86:     option transport-type socket
 87:     option transport.address-family inet
 88:     option username b3ebc2cb-f40f-458a-b8b7-7dae1042a52d
 89:     option password b5d8c727-28eb-4c0d-a83f-c2c0407e8182
 90:     option transport.socket.ssl-enabled off
 91:     option transport.tcp-user-timeout 0
 92:     option transport.socket.keepalive-time 20
 93:     option transport.socket.keepalive-interval 2
 94:     option transport.socket.keepalive-count 9
 95:     option strict-locks off
 96:     option send-gids true
 97: end-volume
 98:
 99: volume HDD5T-client-4
100:     type protocol/client
101:     option ping-timeout 42
102:     option remote-host gluster3.urbaman.it
103:     option remote-subvolume /HDD5T31
104:     option transport-type socket
105:     option transport.address-family inet
106:     option username b3ebc2cb-f40f-458a-b8b7-7dae1042a52d
107:     option password b5d8c727-28eb-4c0d-a83f-c2c0407e8182
[2023-05-04 07:22:03.714636 +0000] I [MSGID: 114046] [client-handshake.c:856:client_setvolume_cbk] 0-HDD5T-client-0: Connected, attached to remote volume [{conn-name=HDD5T-client-0}, {remote_subvol=/HDD5T11}]
[2023-05-04 07:22:03.714748 +0000] I [MSGID: 108005] [afr-common.c:6045:__afr_handle_child_up_event] 0-HDD5T-replicate-0: Subvolume 'HDD5T-client-0' came back up; going online.
108:     option transport.socket.ssl-enabled off
109:     option transport.tcp-user-timeout 0
110:     option transport.socket.keepalive-time 20
111:     option transport.socket.keepalive-interval 2
112:     option transport.socket.keepalive-count 9
113:     option strict-locks off
114:     option send-gids true
115: end-volume
116:
117: volume HDD5T-client-5
118:     type protocol/client
119:     option ping-timeout 42
120:     option remote-host gluster3.urbaman.it
121:     option remote-subvolume /HDD5T32
122:     option transport-type socket
123:     option transport.address-family inet
124:     option username b3ebc2cb-f40f-458a-b8b7-7dae1042a52d
125:     option password b5d8c727-28eb-4c0d-a83f-c2c0407e8182
126:     option transport.socket.ssl-enabled off
127:     option transport.tcp-user-timeout 0
128:     option transport.socket.keepalive-time 20
129:     option transport.socket.keepalive-interval 2
130:     option transport.socket.keepalive-count 9
131:     option strict-locks off
132:     option send-gids true
133: end-volume
134:
135: volume HDD5T-replicate-1
136:     type cluster/replicate
137:     option afr-pending-xattr HDD5T-client-3,HDD5T-client-4,HDD5T-client-5
138:     option volume-id 7a19b3b6-2312-4694-ad71-cb7a2c444657
139:     option granular-entry-heal on
140:     option use-compound-fops off
141:     option use-anonymous-inode yes
142:     subvolumes HDD5T-client-3 HDD5T-client-4 HDD5T-client-5
143: end-volume
144:
145: volume HDD5T-dht
146:     type cluster/distribute
147:     option lock-migration off
148:     option force-migration off
149:     subvolumes HDD5T-replicate-0 HDD5T-replicate-1
150: end-volume
151:
152: volume HDD5T-utime
153:     type features/utime
154:     option noatime on
155:     subvolumes HDD5T-dht
156: end-volume
157:
158: volume HDD5T-write-behind
159:     type performance/write-behind
160:     subvolumes HDD5T-utime
161: end-volume
162:
163: volume HDD5T-open-behind
164:     type performance/open-behind
165:     subvolumes HDD5T-write-behind
166: end-volume
167:
168: volume HDD5T-quick-read
169:     type performance/quick-read
170:     subvolumes HDD5T-open-behind
171: end-volume
172:
173: volume HDD5T-md-cache
174:     type performance/md-cache
175:     subvolumes HDD5T-quick-read
176: end-volume
177:
178: volume HDD5T
179:     type debug/io-stats
180:     option log-level INFO
181:     option threads 16
182:     option latency-measurement off
183:     option count-fop-hits off
184:     option global-threading off
185:     subvolumes HDD5T-md-cache
186: end-volume
187:
188: volume meta-autoload
189:     type meta
190:     subvolumes HDD5T
191: end-volume
192:
+------------------------------------------------------------------------------+
[2023-05-04 07:22:03.715771 +0000] I [MSGID: 114046] [client-handshake.c:856:client_setvolume_cbk] 0-HDD5T-client-1: Connected, attached to remote volume [{conn-name=HDD5T-client-1}, {remote_subvol=/HDD5T12}]
[2023-05-04 07:22:03.716334 +0000] I [MSGID: 108002] [afr-common.c:6417:afr_notify] 0-HDD5T-replicate-0: Client-quorum is met
[2023-05-04 07:22:03.716430 +0000] I [MSGID: 114057] [client-handshake.c:1118:select_server_supported_programs] 0-HDD5T-client-2: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2023-05-04 07:22:03.716709 +0000] I [rpc-clnt.c:1969:rpc_clnt_reconfig] 0-HDD5T-client-4: changing port to 57467 (from 0)
[2023-05-04 07:22:03.716758 +0000] I [socket.c:834:__socket_shutdown] 0-HDD5T-client-4: intentional socket shutdown(10)
[2023-05-04 07:22:03.716804 +0000] I [MSGID: 114057] [client-handshake.c:1118:select_server_supported_programs] 0-HDD5T-client-3: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2023-05-04 07:22:03.718045 +0000] I [rpc-clnt.c:1969:rpc_clnt_reconfig] 0-HDD5T-client-5: changing port to 53049 (from 0)
[2023-05-04 07:22:03.718098 +0000] I [socket.c:834:__socket_shutdown] 0-HDD5T-client-5: intentional socket shutdown(9)
[2023-05-04 07:22:03.718997 +0000] I [MSGID: 114057] [client-handshake.c:1118:select_server_supported_programs] 0-HDD5T-client-4: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2023-05-04 07:22:03.720322 +0000] I [MSGID: 114057] [client-handshake.c:1118:select_server_supported_programs] 0-HDD5T-client-5: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2023-05-04 07:22:03.720421 +0000] I [MSGID: 114046] [client-handshake.c:856:client_setvolume_cbk] 0-HDD5T-client-2: Connected, attached to remote volume [{conn-name=HDD5T-client-2}, {remote_subvol=/HDD5T21}]
[2023-05-04 07:22:03.721060 +0000] I [MSGID: 114046] [client-handshake.c:856:client_setvolume_cbk] 0-HDD5T-client-3: Connected, attached to remote volume [{conn-name=HDD5T-client-3}, {remote_subvol=/HDD5T22}]
[2023-05-04 07:22:03.721091 +0000] I [MSGID: 108005] [afr-common.c:6045:__afr_handle_child_up_event] 0-HDD5T-replicate-1: Subvolume 'HDD5T-client-3' came back up; going online.
[2023-05-04 07:22:03.722951 +0000] I [MSGID: 114046] [client-handshake.c:856:client_setvolume_cbk] 0-HDD5T-client-4: Connected, attached to remote volume [{conn-name=HDD5T-client-4}, {remote_subvol=/HDD5T31}]
[2023-05-04 07:22:03.723762 +0000] I [MSGID: 114046] [client-handshake.c:856:client_setvolume_cbk] 0-HDD5T-client-5: Connected, attached to remote volume [{conn-name=HDD5T-client-5}, {remote_subvol=/HDD5T32}]
[2023-05-04 07:22:03.726201 +0000] I [MSGID: 108002] [afr-common.c:6417:afr_notify] 0-HDD5T-replicate-1: Client-quorum is met
[2023-05-04 07:22:03.732042 +0000] I [MSGID: 108031] [afr-common.c:3201:afr_local_discovery_cbk] 0-HDD5T-replicate-0: selecting local read_child HDD5T-client-0
[2023-05-04 07:22:03.732116 +0000] I [MSGID: 108031] [afr-common.c:3201:afr_local_discovery_cbk] 0-HDD5T-replicate-0: selecting local read_child HDD5T-client-1
[2023-05-04 07:22:03.734083 +0000] I [MSGID: 104041] [glfs-resolve.c:973:__glfs_active_subvol] 0-HDD5T: switched to graph [{subvol=6e667331-2d35-3639-3933-302d32303233}, {id=0}]

Here's the Gluster log from my last tests (again, it seems to log in a different time zone while being on the very same machine; the machine's time is 09:22); it includes this test in the last two lines.

[2023-05-03 19:47:07.539618 +0000] I [MSGID: 106496] [glusterd-handshake.c:954:__server_getspec] 0-management: Received mount request for volume HDD5T
[2023-05-03 19:47:07.601650 +0000] I [MSGID: 106164] [glusterd-handshake.c:1561:__server_get_volume_info] 0-management: Received get volume info req []
[2023-05-03 19:47:58.678910 +0000] I [MSGID: 106496] [glusterd-handshake.c:954:__server_getspec] 0-management: Received mount request for volume HDD5T
[2023-05-03 19:47:58.736737 +0000] I [MSGID: 106164] [glusterd-handshake.c:1561:__server_get_volume_info] 0-management: Received get volume info req []
[2023-05-03 21:03:11.760989 +0000] I [MSGID: 106496] [glusterd-handshake.c:954:__server_getspec] 0-management: Received mount request for volume HDD5T
[2023-05-03 21:03:11.821648 +0000] I [MSGID: 106164] [glusterd-handshake.c:1561:__server_get_volume_info] 0-management: Received get volume info req []
[2023-05-04 07:22:03.690593 +0000] I [MSGID: 106496] [glusterd-handshake.c:954:__server_getspec] 0-management: Received mount request for volume HDD5T
[2023-05-04 07:22:03.751418 +0000] I [MSGID: 106164] [glusterd-handshake.c:1561:__server_get_volume_info] 0-management: Received get volume info req []

Here's the gluster volume info, with the options enabled/disabled:

Volume Name: HDD5T
Type: Distributed-Replicate
Volume ID: 7a19b3b6-2312-4694-ad71-cb7a2c444657
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: gluster1.urbaman.it:/HDD5T11
Brick2: gluster1.urbaman.it:/HDD5T12
Brick3: gluster2.urbaman.it:/HDD5T21
Brick4: gluster2.urbaman.it:/HDD5T22
Brick5: gluster3.urbaman.it:/HDD5T31
Brick6: gluster3.urbaman.it:/HDD5T32
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on

urbaman, May 04 '23 07:05

So the Ganesha log timestamps line up with the system clock and the Gluster ones are 2 hours off? There must be a configuration somewhere that is telling Gluster a different timezone setting...
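
(A quick sanity check to compare the host's timezone with the timestamps Gluster writes could be, on either machine:)

timedatectl | grep 'Time zone'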

What is your Ganesha config?

And then I need to see the Ganesha log from a failing ls with the FSAL component at FULL_DEBUG, to see if I can figure out why you're getting an NFS4ERR_NOTSUPP.

ffilz, May 04 '23 14:05

Hi,

From my research, Gluster only uses UTC for its logs; it doesn't seem to be changeable.

The logs above are already from the configuration with the FSAL component at FULL_DEBUG.

This is my FSAL config:

EXPORT{
    Export_Id = 1 ;   # Export ID unique to each export
    Path = "/HDD5T";  # Path of the volume to be exported. Eg: "/test_volume"

    FSAL {
        name = GLUSTER;
        hostname = "10.xx.xx.xx";  # IP of one of the nodes in the trusted pool
        volume = "HDD5T";  # Volume name. Eg: "test_volume"
    }

    Access_type = RW;    # Access permissions
    Squash = No_root_squash; # To enable/disable root squashing
    Disable_ACL = TRUE;  # To enable/disable ACL
    Pseudo = "/HDD5T";  # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
    Protocols = "3","4" ;    # NFS protocols supported
    Transports = "TCP" ; # Transport protocols supported
    SecType = "sys";     # Security flavors supported
}

Is there a problem with the volume/path definitions? I have an HDD5T Gluster volume with no paths (folders) inside it, and I'd like clients to see the root of the volume. The volume is not mounted on the server (so there is no HDD5T folder in df -h).

Should I define the Path as "/" instead of "/HDD5T"? And what about the Pseudo?
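
(For comparison, an export pointed at the volume root while keeping the same pseudo path would look like the sketch below; all values are carried over from the config above, and this is not a confirmed fix for the 524 error:)

EXPORT{
    Export_Id = 1;
    Path = "/";               # root of the Gluster volume
    Pseudo = "/HDD5T";        # NFSv4 pseudo path stays the same

    FSAL {
        name = GLUSTER;
        hostname = "10.xx.xx.xx";
        volume = "HDD5T";
    }

    Access_type = RW;
    Squash = No_root_squash;
    Disable_ACL = TRUE;
    Protocols = "3","4";
    Transports = "TCP";
    SecType = "sys";
}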

urbaman, May 05 '23 10:05