
Gluster Filesystem : Build your distributed storage in minutes

Results: 351 glusterfs issues (sorted by recently updated)

**Description of problem:** Is there no sync mechanism? The mdcache is synced via the upcall mechanism, but how is the inode_table synced? Even the inode_tables on client A and client B do not have the same...
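To make the question concrete, here is a minimal, hypothetical sketch of the upcall-style invalidation pattern the reporter describes for mdcache, applied to a per-client inode cache. None of these names (`inode_cache_put`, `on_upcall_invalidate`, the table layout) are GlusterFS APIs; the real `inode_table_t` in libglusterfs is far more involved.

```c
/* Hypothetical sketch, not GlusterFS code: upcall-driven invalidation
 * of a per-client inode cache keyed by gfid. */
#include <stdio.h>
#include <string.h>

#define TABLE_SIZE 64

struct cached_inode {
    char gfid[37];   /* textual gfid + NUL */
    int  valid;      /* 0 = slot free or invalidated */
};

static struct cached_inode table[TABLE_SIZE];

/* Record a gfid in this client's local table. */
static void inode_cache_put(const char *gfid) {
    for (int i = 0; i < TABLE_SIZE; i++) {
        if (!table[i].valid) {
            snprintf(table[i].gfid, sizeof(table[i].gfid), "%s", gfid);
            table[i].valid = 1;
            return;
        }
    }
}

/* Upcall handler: the server reports that another client changed this
 * gfid, so drop the stale local entry instead of trusting it. */
static void on_upcall_invalidate(const char *gfid) {
    for (int i = 0; i < TABLE_SIZE; i++) {
        if (table[i].valid && strcmp(table[i].gfid, gfid) == 0) {
            table[i].valid = 0;   /* next access must re-lookup */
            printf("invalidated %s\n", gfid);
        }
    }
}

int main(void) {
    inode_cache_put("0000-aaaa");
    on_upcall_invalidate("0000-aaaa");  /* simulated server notification */
    return 0;
}
```

Without such a notification path, two clients' tables diverge exactly as the reporter observes.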

**Description of problem:** GlusterFS FUSE crashes occasionally, and an "Assertion failed" message can be found in the logs, such as: 13732:[2023-01-11 06:04:43.481244] E [inode.c:767:inode_forget_atomic] (-->/opt/lib/glusterfs/8.2/xlator/mount/fuse.so(+0xa073) [0x41f8073] -->/opt/lib/libglusterfs.so.0(inode_forget_with_unref+0x25) [0x4066755] -->/opt/lib/libglusterfs.so.0(+0x366dd) [0x40646dd] ) 0-: Assertion failed:...
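The truncated log does not show which assertion at inode.c:767 fired, so the following is only an illustrative sketch of the general failure class: a FORGET that drops more lookups than the inode's recorded count (for example, a duplicated forget). The struct and function names are invented.

```c
/* Illustrative sketch of the failure class, not the real GlusterFS code. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct fake_inode {
    uint64_t nlookup;   /* lookups the kernel still holds on this inode */
};

static void inode_forget(struct fake_inode *in, uint64_t n) {
    /* Comparable in spirit to an inode-table assertion: the caller must
     * never forget more references than were looked up. */
    assert(in->nlookup >= n);
    in->nlookup -= n;
}

int main(void) {
    struct fake_inode in = { .nlookup = 1 };
    inode_forget(&in, 1);   /* fine: count drops to 0 */
    inode_forget(&in, 1);   /* double forget: assertion fails here */
    printf("unreachable when assertions are enabled\n");
    return 0;
}
```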

glusterfs encounters a SIGSEGV in __gf_free called from glusterfs_volfile_fetch_on The glusterfs (FUSE client) shows the stacktrace below: Program terminated with signal 11, Segmentation fault. #0 __gf_free (free_ptr=free_ptr@entry=0x556c7d749040) at mem-pool.c:326 326...
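For context, __gf_free in mem-pool.c validates bookkeeping stored just before the user pointer, so a double free or a header overwrite can crash inside the free itself. Below is a hedged, self-contained sketch of that header-checked-free pattern; `xmalloc`, `xfree`, and the header layout are invented for illustration and are not the GlusterFS implementation.

```c
/* Sketch of a header-checked allocator free (pattern only, not
 * GlusterFS code): a magic word ahead of the user pointer lets the
 * free detect corrupted or foreign pointers. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define MEM_MAGIC 0xCAFEBABEu

struct mem_header {
    uint32_t magic;
    size_t   size;
};

static void *xmalloc(size_t size) {
    struct mem_header *h = malloc(sizeof(*h) + size);
    if (!h)
        return NULL;
    h->magic = MEM_MAGIC;
    h->size  = size;
    return h + 1;               /* user sees memory after the header */
}

static void xfree(void *ptr) {
    if (!ptr)
        return;
    struct mem_header *h = (struct mem_header *)ptr - 1;
    if (h->magic != MEM_MAGIC) {
        /* Corrupted or foreign pointer: fail loudly here instead of
         * crashing somewhere random later. */
        fprintf(stderr, "xfree: bad header magic %#x\n",
                (unsigned)h->magic);
        abort();
    }
    h->magic = 0;               /* poison so a double free is caught */
    free(h);
}

int main(void) {
    char *p = xmalloc(16);
    xfree(p);
    return 0;
}
```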

**Description of problem:** The .glusterfs folder keeps going missing, causing the storage service to become unavailable. **The exact command to reproduce the issue**: highly concurrent reading, writing, and deleting in glusterfs...
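A hedged reproducer sketch for that workload: several threads concurrently creating, writing, and unlinking files under one mount point. The mount path, thread count, and iteration count are assumptions; point `STRESS_DIR` at a disposable test mount before running, and build with `-pthread`.

```c
/* Concurrent create/write/delete stress sketch (assumed paths/counts). */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define STRESS_DIR "/mnt/glustervol/stress"   /* assumption: test mount */
#define THREADS 8
#define ITERS   1000

static void *worker(void *arg) {
    long id = (long)arg;
    char path[256];
    for (int i = 0; i < ITERS; i++) {
        snprintf(path, sizeof(path), STRESS_DIR "/t%ld_%d", id, i);
        int fd = open(path, O_CREAT | O_WRONLY, 0644);
        if (fd >= 0) {
            write(fd, "x", 1);
            close(fd);
        }
        unlink(path);              /* delete while sibling threads create */
    }
    return NULL;
}

int main(void) {
    pthread_t t[THREADS];
    for (long i = 0; i < THREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < THREADS; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```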

**Description of problem:** A lock file on a non-glusterfs volume is held forever by glusterfs FUSE after a forced reboot. **The exact command to reproduce the issue**: used flock() from PHP on /tmp/test.lock...
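PHP's flock() wraps the flock(2) syscall, so the same scenario can be driven from C. A minimal reproducer sketch, assuming /tmp/test.lock as in the report: take an exclusive lock, hold it across a forced reboot of the client, then check from another host whether the lock ever clears.

```c
/* Hold an exclusive flock() on /tmp/test.lock indefinitely. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void) {
    int fd = open("/tmp/test.lock", O_CREAT | O_RDWR, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (flock(fd, LOCK_EX | LOCK_NB) < 0) {
        perror("flock");            /* lock still held by someone else */
        return 1;
    }
    printf("lock acquired; sleeping (force-reboot the client now)\n");
    pause();                        /* hold the lock until killed */
    return 0;
}
```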

This patch brings multiple readdir improvements for FUSE: 1) set the buffer size to 128 KB when winding a readdir call from FUSE; 2) wind a lookup call for all...
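The idea behind point 1 is that a larger reply buffer packs more entries per readdir round trip. The sketch below illustrates only that packing logic with plain readdir(3) and an invented flat entry layout; it is not the patch itself, and in real FUSE code libfuse's fuse_add_direntry() does the packing.

```c
/* Pack directory entries into one large (128 KB) buffer, stopping when
 * full: fewer, bigger replies instead of many small ones. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

#define READDIR_BUF_SIZE (128 * 1024)   /* the 128 KB from the patch */

int main(void) {
    char buf[READDIR_BUF_SIZE];
    size_t used = 0;
    int entries = 0;

    DIR *d = opendir(".");
    if (!d)
        return 1;

    struct dirent *de;
    while ((de = readdir(d)) != NULL) {
        size_t need = strlen(de->d_name) + 1;
        if (used + need > sizeof(buf))
            break;                       /* buffer full: reply now */
        memcpy(buf + used, de->d_name, need);
        used += need;
        entries++;
    }
    closedir(d);
    printf("packed %d entries into %zu of %d bytes\n",
           entries, used, READDIR_BUF_SIZE);
    return 0;
}
```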

**Description of problem:** Recently upgraded to Gluster 11. Ever since, bricks seem to randomly go offline during heavy write operations and/or rebalances. When querying the port, it's accessible and available...

Hello. I want to have 3 data replica nodes, plus a fourth glusterfs node acting as an arbiter for these three data nodes. Is this possible? Currently I have 3 replica data nodes...

**Description of problem:** Multiple NFS clients perform create and delete operations in the same directory; the brick is restarted at this time, resulting in a directory gfid split-brain: sudo /var/lib/sdsom/venv/bin/salt '*' cmd.run 'getfattr...

**Description of problem:** gf_print_trace repeatedly writes "frame: type(0) op(0)" to the logfile, which eventually reached 8.7 TB in size. Checking and debugging with gdb, I found a stack item obtained from...
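The truncated report does not say what corrupted the stack item, but a trace printer that follows possibly-corrupted frame links with no bound is one plausible way a cycle could produce an endless stream of identical lines. The sketch below shows that defensive pattern only; the struct, the cap, and the truncation message are invented, not the GlusterFS fix.

```c
/* Illustrative guard: cap the number of frames printed so a corrupted
 * (cyclic) frame list cannot spin forever and flood the log. */
#include <stdio.h>

struct frame {
    int type, op;
    struct frame *next;
};

#define MAX_FRAMES_TO_PRINT 256   /* assumption: a sane stack-depth cap */

static void print_trace(struct frame *f) {
    int n = 0;
    while (f && n++ < MAX_FRAMES_TO_PRINT) {
        fprintf(stderr, "frame: type(%d) op(%d)\n", f->type, f->op);
        f = f->next;
    }
    if (f)
        fprintf(stderr, "... trace truncated (possible cycle)\n");
}

int main(void) {
    struct frame a = { 0, 0, NULL };
    a.next = &a;                  /* simulated corruption: self-cycle */
    print_trace(&a);
    return 0;
}
```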