mohammaddawoodshaik

Results 15 comments of mohammaddawoodshaik

@leelavg Sorry for the confusion, it was the node-plugin pod that I deleted to simulate the scenario - what will happen if the FUSE process gets restarted? But the problem I see...
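For reference, a minimal sketch of forcing that restart by deleting the node-plugin pod; the `kadalu` namespace and the pod-name pattern are assumptions and may differ per deployment:

```sh
# Find the CSI node-plugin pod running on the node in question (namespace is an assumption)
kubectl get pods -n kadalu -o wide | grep nodeplugin

# Delete it; the DaemonSet recreates the pod, which restarts the FUSE client process on that node
kubectl delete pod <nodeplugin-pod-name> -n kadalu
```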

I have Kadalu-1.1.0 with Replica3 deployed in my cluster, and I have tried creating a file, appending data, removing the file, creating the same file with data, and appending data...
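A minimal sketch of that I/O sequence, run against the Replica3-backed mount inside the application pod (the mount path is an assumption):

```sh
MNT=/mnt/kadalu-pvc                      # assumed mount path of the Kadalu PVC

echo "initial data" > "$MNT/testfile"    # create the file
echo "more data"   >> "$MNT/testfile"    # append data
rm "$MNT/testfile"                       # remove the file
echo "recreated"    > "$MNT/testfile"    # create the same file again with data
echo "appended"    >> "$MNT/testfile"    # append data once more
```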

@hickersonj - I have tried your steps, but I am still not seeing the issue. One difference here is that I have a Kubernetes-based environment. @rajtupakula - Let's try the steps in...

We are seeing this issue in our case as well. We have one volume with a 3-brick, 3-replica setup across 3 nodes. On one node we restarted all Gluster services, and after...
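A sketch of the restart performed on that node, assuming the services are managed by systemd (the unit and volume names are assumptions):

```sh
# Restart the Gluster management daemon on the affected node
systemctl restart glusterd

# Afterwards, confirm the bricks and self-heal daemon come back online
gluster volume status <volname>
```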

FYI - https://github.com/kadalu/kadalu/issues/1012 - the root cause of that issue is also the same as this one.

Any update on this issue? This is blocking our promotions. Any help would be appreciated.

Some more info collected from the FUSE client logs:

```
[2023-08-18 05:45:52.101240 +0000] D [rpc-clnt-ping.c:290:rpc_clnt_start_ping] 0-common-storage-pool-client-1: ping timeout is 0, returning
[2023-08-18 05:45:52.101751 +0000] D [write-behind.c:1742:wb_process_queue] (--> /opt/lib/libglusterfs.so.0(_gf_log_callingfn+0x182)[0x7fda137c12f2] (--> /opt/lib/glusterfs/2023.04.17/xlator/performance/write-behind.so(+0x7fe9)[0x7fda0dfeffe9]...
```

From further debugging on the issue, I found some more info:

* When subvol directories are stuck in a healing state, I observed the following:

```
root@maglev-master-10-104-241-73:/data/srv/data/brick/subvol/a1/cb# getfattr -m . -d -e hex...
```
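For context, a hedged sketch of the same inspection completed, plus checking pending heals; the brick path is the one shown in the prompt above, the volume name is an assumption, and the heal command applies only if the volume is managed by glusterd:

```sh
# Dump all extended attributes of the stuck directory on the brick, in hex.
# Non-zero trusted.afr.<volname>-client-N values indicate pending heal operations against that replica.
getfattr -m . -d -e hex /data/srv/data/brick/subvol/a1/cb

# List entries still pending heal for the volume (volume name is an assumption)
gluster volume heal <volname> info
```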

@leelavg I forgot to mention some more info here. The info I shared above is from a system where all server and node-plugin pods are running fine. Despite the fact we...

> @mohammaddawoodshaik thanks for detailed debugging, and logs for the issue.
>
> just opened [gluster/glusterfs#4224](https://github.com/gluster/glusterfs/pull/4224), which hopefully should fix the issue.
>
> Lets wait for comments from the...