GlusterFS Brick Error Logging with getxattr failed
Description of problem: The brick logs report getxattr failed errors that appear to be related to the user_xattr flag.
I would like to understand why these errors occur. In a Rocky Linux 8.10 environment, all of the following tests showed the same behavior:
- Changing the xfsprogs version and rebuilding the brick filesystem (xfsprogs 5.0.0, 4.5.0, 5.13.0).
- Restarting the brick processes after changing the relevant GlusterFS volume options (features.acl: off, features.selinux: off, nfs.acl: disabled); the commands used are sketched below.
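For reference, the option changes and restart were done along these lines (a minimal sketch; the option names are the ones listed above, taken from this report, and the exact accepted values may vary by build):

```
gluster volume set vol01 features.acl off
gluster volume set vol01 features.selinux off
gluster volume set vol01 nfs.acl off

# restart the volume so the brick processes pick up the changed options
gluster volume stop vol01
gluster volume start vol01
```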
The exact command to reproduce the issue: Create a gluster volume with the default configuration and do not specify any other options; a sketch of the commands follows.
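A minimal sketch of the setup, assuming the two brick hosts shown in the volume info below and a plain FUSE client mount with no extra options:

```
gluster peer probe 192.168.21.182
gluster volume create vol01 replica 2 192.168.21.181:/appdata/brick 192.168.21.182:/appdata/brick
gluster volume start vol01

# mount from a client and perform normal file I/O; the brick logs then show the getxattr errors
mount -t glusterfs 192.168.21.181:/vol01 /mnt
```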
The full output of the command that failed:
Expected results: No getxattr failed errors in the brick logs.
Mandatory info:
**- The output of the `gluster volume info` command**:
Volume Name: vol01
Type: Replicate
Volume ID: f2245f73-dd3b-4d1f-9d49-bf2041bd6baf
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.21.181:/appdata/brick
Brick2: 192.168.21.182:/appdata/brick
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on
storage.health-check-interval: 0
**- The output of the `gluster volume status` command**:
Status of volume: vol01
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 192.168.21.181:/appdata/brick 49152 0 Y 12669
Brick 192.168.21.182:/appdata/brick 49153 0 Y 15483
Self-heal Daemon on localhost N/A N/A Y 12684
Self-heal Daemon on 192.168.21.182 N/A N/A Y 15498
Task Status of Volume vol01
------------------------------------------------------------------------------
There are no active volume tasks
**- The output of the `gluster volume heal` command**:
# gluster volume heal vol01 info
Brick 192.168.21.181:/appdata/brick
Status: Connected
Number of entries: 0
Brick 192.168.21.182:/appdata/brick
Status: Connected
Number of entries: 0
**- Provide logs present on following locations of client and server nodes** - /var/log/glusterfs/
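The getxattr errors are logged by the bricks, so the relevant lines should be in the brick logs on the server nodes. A sketch for pulling them (the log file name is derived from the brick path, so /appdata/brick is assumed to map to appdata-brick.log):

```
# run on each server node
grep -i "getxattr failed" /var/log/glusterfs/bricks/appdata-brick.log | tail -n 20
```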
**- Is there any crash? Provide the backtrace and coredump** - No.
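For reference, the "no crash" answer can be double-checked along these lines on a systemd-based system (a sketch; coredump handling may be configured differently on your hosts):

```
# any recorded coredumps for gluster processes?
coredumpctl list | grep -i gluster

# any crash backtraces in the brick logs?
grep -l "signal received" /var/log/glusterfs/bricks/*.log
```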
Additional info:
- Brick filesystem mount options:
/dev/sda3 on /appdata type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=256k,sunit=512,swidth=512,noquota)
The default mount options are in use; when the user_xattr and noacl mount options are added, they are rejected as invalid parameters and the filesystem is not mounted. (A quick xattr check on the brick is sketched after this list.)
- Brick filesystem xfs_info output:
meta-data=/dev/sda3 isize=512 agcount=16, agsize=1638400 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=0 inobtcount=0
data = bsize=4096 blocks=26214400, imaxpct=25
= sunit=64 swidth=64 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=12800, version=2
= sectsz=512 sunit=64 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
- Changing the GlusterFS version (10.3 -> 9.6) was also tested.
- The operating system / glusterfs version: Rocky Linux 8.10 / GlusterFS 10.3
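Since the errors appear tied to the user_xattr flag, a basic extended-attribute check directly on the brick filesystem may help narrow things down (a sketch; the test file name is arbitrary). On XFS, extended attributes are always available, which is why user_xattr is not a recognized mount option:

```
# write and read back a user.* xattr directly on the brick
touch /appdata/brick/xattr-test
setfattr -n user.test -v hello /appdata/brick/xattr-test
getfattr -n user.test /appdata/brick/xattr-test
rm /appdata/brick/xattr-test

# dump the trusted.* xattrs gluster maintains on the brick root (run as root)
getfattr -m . -d -e hex /appdata/brick
```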
@rafikc30 Do you know if this happens with nfs-ganesha?