
GlusterFS Brick Error Logging with getxattr failed

Open · stc-sonny opened this issue 11 months ago · 1 comment

Description of problem: Error logging related to the `user_xattr` flag.

I am wondering why these errors occur. In a Rocky Linux 8.10 environment, the following tests all showed the same behavior:

  1. Changed the xfsprogs version and recreated the brick filesystem (xfsprogs 5.0.0, 4.5.0, 5.13.0).
  2. Restarted the brick processes after changing the relevant GlusterFS volume options (features.acl: off, features.selinux: off, nfs.acl: disabled).

The exact command to reproduce the issue: use the normal gluster volume default configuration; do not specify any other options.

The full output of the command that failed:

```
[2024-12-31 06:03:45.075027 +0000] W [posix-inode-fd-ops.c:3881:posix_getxattr] 0-vol01-posix: Extended attributes not supported (try remounting brick with 'user_xattr' flag)
[2024-12-31 06:03:45.075076 +0000] E [MSGID: 113001] [posix-inode-fd-ops.c:3892:posix_getxattr] 0-vol01-posix: getxattr failed on /appdata/brick/ (path: /): system.nfs4_acl [Operation not supported]
```
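For reference, the two messages can be checked outside of GlusterFS by calling getxattr(2) directly on the brick root: the question is whether the filesystem rejects extended attributes in general or only the system.nfs4_acl name that the E-level line shows being requested for path /. A minimal diagnostic sketch (my own, not from the GlusterFS sources), assuming the brick path /appdata/brick from this report and root privileges so the trusted.* attribute is readable:

```
/* Diagnostic sketch, not part of GlusterFS: query the same attribute the
 * brick log complains about, directly on the brick root, to tell apart
 * "extended attributes are unsupported" from "only system.nfs4_acl is
 * unsupported".  The brick path is the one from this report. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/xattr.h>

static void probe(const char *path, const char *name)
{
    char buf[1024];
    ssize_t n = getxattr(path, name, buf, sizeof(buf));

    if (n >= 0)
        printf("%-30s -> %zd bytes\n", name, n);
    else
        printf("%-30s -> %s\n", name, strerror(errno));
}

int main(void)
{
    const char *brick = "/appdata/brick";            /* brick root from this report */

    probe(brick, "system.nfs4_acl");                 /* the name from the E-level log line */
    probe(brick, "trusted.glusterfs.volume-id");     /* normally set by glusterd on a brick root */
    probe(brick, "user.probe");                      /* "No data available" (ENODATA) means user xattrs work */
    return 0;
}
```

On a local XFS brick, EOPNOTSUPP for system.nfs4_acl combined with working trusted.*/user.* lookups would suggest that xattr support itself is fine and only that particular NFSv4 ACL attribute is unsupported, in which case the 'user_xattr' hint in the warning is misleading.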

Expected results:

No Error Logging

Mandatory info:

**- The output of the `gluster volume info` command**:

```
Volume Name: vol01
Type: Replicate
Volume ID: f2245f73-dd3b-4d1f-9d49-bf2041bd6baf
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.21.181:/appdata/brick
Brick2: 192.168.21.182:/appdata/brick
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on
storage.health-check-interval: 0
```

**- The output of the `gluster volume status` command**:

```
Status of volume: vol01
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.21.181:/appdata/brick         49152     0          Y       12669
Brick 192.168.21.182:/appdata/brick         49153     0          Y       15483
Self-heal Daemon on localhost               N/A       N/A        Y       12684
Self-heal Daemon on 192.168.21.182          N/A       N/A        Y       15498

Task Status of Volume vol01
------------------------------------------------------------------------------
There are no active volume tasks

```

**- The output of the `gluster volume heal` command**:

```

# gluster volume heal vol01 info
Brick 192.168.21.181:/appdata/brick
Status: Connected
Number of entries: 0

Brick 192.168.21.182:/appdata/brick
Status: Connected
Number of entries: 0

```

**- Provide logs present on following locations of client and server nodes** - /var/log/glusterfs/

**- Is there any crash? Provide the backtrace and coredump**: No.

Additional info:

  1. Brick filesystem mount options: `/dev/sda3 on /appdata type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=256k,sunit=512,swidth=512,noquota)`. The default options are in use; when the user_xattr or noacl mount options are added, they are rejected as invalid parameters and the filesystem does not mount (see the sketch after this list).

  2. Brick filesystem `xfs_info` result:

```
meta-data=/dev/sda3              isize=512    agcount=16, agsize=1638400 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=26214400, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=12800, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
```

  3. Changed the GlusterFS version (10.3 -> 9.6).
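As far as I know, user_xattr and noacl are ext2/3/4 mount options and XFS enables extended attributes unconditionally, which would explain why they are rejected as invalid here. A quick way to confirm that xattrs actually work on this mount, independent of GlusterFS, is to set and read back a user.* attribute on a scratch file. A minimal sketch, assuming /appdata is the XFS mount from this report and using a hypothetical scratch path /appdata/xattr-test:

```
/* Sketch, independent of GlusterFS: set and read back a user.* attribute on a
 * scratch file under the XFS mount to confirm that extended attributes work
 * without any 'user_xattr' mount option.  /appdata/xattr-test is a
 * hypothetical scratch path chosen for this test. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/xattr.h>
#include <unistd.h>

int main(void)
{
    const char *file = "/appdata/xattr-test";        /* hypothetical scratch file on the brick mount */

    int fd = open(file, O_CREAT | O_WRONLY, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    close(fd);

    /* EOPNOTSUPP here would mean extended attributes really are disabled. */
    if (setxattr(file, "user.probe", "1", 1, 0) != 0) {
        perror("setxattr user.probe");
        unlink(file);
        return 1;
    }

    char buf[16];
    ssize_t n = getxattr(file, "user.probe", buf, sizeof(buf));
    printf("getxattr(user.probe) -> %zd (%s)\n",
           n, n >= 0 ? "user xattrs work" : strerror(errno));

    unlink(file);                                    /* clean up the scratch file */
    return 0;
}
```

If the setxattr call succeeds while the brick log still shows the warning, the "remounting with user_xattr" suggestion is probably not the actual problem.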

**- The operating system / glusterfs version**: Rocky Linux 8.10 / GlusterFS 10.3

Note: Please hide any confidential data which you don't want to share in public like IP address, file name, hostname or any other configuration

stc-sonny · Dec 31 '24 06:12

@rafikc30 Do you know if this happens with nfs-ganesha?

pranithk · Jul 01 '25 01:07