inode resource leak - v11.0 (maybe related to "No space left on device")
Description of problem: I set up a test environment to verify that long-running tests of our applications complete successfully, but after a while we hit "No space left on device". While investigating the problem I found that the number of available inodes keeps decreasing after each file creation and deletion. The same is true for directory creation and deletion.
I have tested with both XFS and EXT4 filesystems; the result is the same.
Can you tell me whether something is configured incorrectly, or whether this is really a bug?
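For context, the behaviour can be reproduced with a simple create/delete loop while watching df -i (a minimal sketch; the filename and iteration count are arbitrary):

# repeat the create/delete cycle, then check free inodes on the brick and on the mount
for i in $(seq 1 100); do
    echo "datatext" > /mnt/gfs_test_vol01/testfile01.txt
    rm -f /mnt/gfs_test_vol01/testfile01.txt
done
df -i /data/glusterfs/test_vol01/brick /mnt/gfs_test_vol01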
The exact command to reproduce the issue:
Tested on a freshly created volume! The volume was created and started with the following commands:
gluster volume create gfs_test_vol01 replica 3 node-a:/data/glusterfs/test_vol01/brick/brick1 node-b:/data/glusterfs/test_vol01/brick/brick2 node-c:/data/glusterfs/test_vol01/brick/brick3
gluster volume start gfs_test_vol01
After this, the volume was mounted at /mnt/gfs_test_vol01 (fstab entry: localhost:/gfs_test_vol01 /mnt/gfs_test_vol01 glusterfs defaults,_netdev,backupvolfile-server=node-b 0 0).
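For reference, the equivalent manual mount (without the fstab entry) would look roughly like this, mirroring the options above:

mount -t glusterfs -o backupvolfile-server=node-b localhost:/gfs_test_vol01 /mnt/gfs_test_vol01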
Available inode numbers on Node-A:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 284 65252 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 284 65252 1% /mnt/gfs_test_vol01
Available inode numbers on Node-B:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 284 65252 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 284 65252 1% /mnt/gfs_test_vol01
Available inode numbers on Node-C:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 284 65252 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 284 65252 1% /mnt/gfs_test_vol01
Create a file via the mountpoint:
echo "datatext" > testfile01.txt
Available inode numbers on Node-A:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 287 65249 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 287 65249 1% /mnt/gfs_test_vol01
Available inode numbers on Node-B:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 287 65249 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 287 65249 1% /mnt/gfs_test_vol01
Available inode numbers on Node-C:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 287 65249 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 287 65249 1% /mnt/gfs_test_vol01
Delete the file via the mountpoint:
rm -f testfile01.txt
Available inode numbers on Node-A:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 286 65250 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 286 65250 1% /mnt/gfs_test_vol01
Available inode numbers on Node-B:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 286 65250 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 286 65250 1% /mnt/gfs_test_vol01
Available inode numbers on Node-C:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 286 65250 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 286 65250 1% /mnt/gfs_test_vol01
Create a file via the mountpoint:
echo "datatext" > testfile01.txt
Available inode numbers on Node-A:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 288 65248 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 288 65248 1% /mnt/gfs_test_vol01
Available inode numbers on Node-B:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 288 65248 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 288 65248 1% /mnt/gfs_test_vol01
Available inode numbers on Node-C:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 288 65248 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 288 65248 1% /mnt/gfs_test_vol01
Delete the file via the mountpoint:
rm -f testfile01.txt
Available inode numbers on Node-A:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 287 65249 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 287 65249 1% /mnt/gfs_test_vol01
Available inode numbers on Node-B:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 287 65249 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 287 65249 1% /mnt/gfs_test_vol01
Available inode numbers on Node-C:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 287 65249 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 287 65249 1% /mnt/gfs_test_vol01
Create a file via the mountpoint:
echo "datatext" > testfile01.txt
Available inode numbers on Node-A:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 289 65247 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 289 65247 1% /mnt/gfs_test_vol01
Available inode numbers on Node-B:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 289 65247 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 289 65247 1% /mnt/gfs_test_vol01
Available inode numbers on Node-C:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 289 65247 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 289 65247 1% /mnt/gfs_test_vol01
Delete the file via the mountpoint:
rm -f testfile01.txt
Available inode numbers on Node-A:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 288 65248 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 288 65248 1% /mnt/gfs_test_vol01
Available inode numbers on Node-B:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 288 65248 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 288 65248 1% /mnt/gfs_test_vol01
Available inode numbers on Node-C:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 65536 288 65248 1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01 65536 288 65248 1% /mnt/gfs_test_vol01
At the end the directory is empty:
root@node-a:/mnt/gfs_test_vol01# ls -la /mnt/gfs_test_vol01/
total 8
drwxr-xr-x 4 root root 4096 May 19 12:38 .
drwxr-xr-x 4 root root 4096 May 19 12:27 ..
Free inodes before and after the test; the count never recovers:
65252 (before) > 65248 (after)
Expected results: All inodes should be freed once they are no longer needed; the value after the test (65248) should equal the value before the test (65252).
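In case it helps with triage: one place the leaked inodes might show up is the bricks' internal .glusterfs directory, where GlusterFS keeps a GFID hardlink for every file. A rough check I can run on a brick (treating the result only as a hint, since GlusterFS also keeps its own housekeeping files there) would be:

# regular files under .glusterfs whose only remaining link is the GFID path
find /data/glusterfs/test_vol01/brick/brick1/.glusterfs -type f -links 1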
Mandatory info:
- The output of the gluster volume info command:
Volume Name: gfs_test_vol01
Type: Distributed-Replicate
Volume ID: f28c482b-d4a1-4ae3-8928-932cd30cc551
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node-a:/data/glusterfs/test_vol01/brick/brick1
Brick2: node-b:/data/glusterfs/test_vol01/brick/brick2
Brick3: node-c:/data/glusterfs/test_vol01/brick/brick3
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
- The output of the gluster volume status command:
root@node-a:~# gluster volume status
Status of volume: gfs_test_vol01
Gluster process                                        TCP Port  RDMA Port  Online  Pid
Brick node-a:/data/glusterfs/test_vol01/brick/brick1   53515     0          Y       24913
Brick node-b:/data/glusterfs/test_vol01/brick/brick2   56548     0          Y       5800
Brick node-c:/data/glusterfs/test_vol01/brick/brick3   52789     0          Y       5684
Self-heal Daemon on localhost                          N/A       N/A        Y       878
Self-heal Daemon on node-b                             N/A       N/A        Y       820
Self-heal Daemon on node-c                             N/A       N/A        Y       819
Task Status of Volume gfs_test_vol01
There are no active volume tasks
- The output of the gluster volume heal command:
Launching heal operation to perform index self heal on volume gfs_test_vol01 has been successful
Use heal info commands to check status.
Brick node-a:/data/glusterfs/test_vol01/brick/brick1
Status: Connected
Number of entries: 0
Brick node-b:/data/glusterfs/test_vol01/brick/brick2
Status: Connected
Number of entries: 0
Brick node-c:/data/glusterfs/test_vol01/brick/brick3
Status: Connected
Number of entries: 0
- Logs present under /var/log/glusterfs/ on the client and server nodes:
data-glusterfs-test_vol01-brick-brick1.log
glfsheal-gfs_test_vol01.log
glusterd.log
glustershd.log
mnt-gfs_test_vol01.log
- The operating system / glusterfs version:
root@node-a:/mnt/gfs_test_vol01# cat /proc/version
Linux version 5.10.0-23-amd64 ([email protected]) (gcc-10 (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2) #1 SMP Debian 5.10.179-1 (2023-05-12)
root@node-a:~# glusterfs --version
glusterfs 11.0
Repository revision: git://git.gluster.org/glusterfs.git