glusterfs
AddressSanitizer: heap-use-after-free
Description of problem: There is a heap-use-after-free bug in the latest git version 37f6ced36792e42b98160f9cfffdbf307421e155.
The exact command to reproduce the issue:
Suppose we have two GlusterFS servers and one client, whose IPs are 192.168.0.30, 192.168.0.31, and 192.168.0.33 respectively.

1. Start the two servers by executing the following script at the server 192.168.0.30.

```bash
# Start daemons
systemctl restart glusterd
sshpass -p "123456" ssh -o StrictHostKeyChecking=no [email protected] systemctl restart glusterd
# Create a volume
gluster peer probe 192.168.0.31
gluster volume create test_volume 192.168.0.30:/root/glusterfs-server 192.168.0.31:/root/glusterfs-server force
gluster volume start test_volume force
```

2. Mount the client, create a directory `testdir`, and set an attribute for it.

```bash
mount -t glusterfs 192.168.0.30:/test_volume /root/glusterfs-client/
mkdir /root/glusterfs-client/testdir
setfattr -n user.attr -v val /root/glusterfs-client/testdir
getfattr -d /root/glusterfs-client/testdir
```

3. Kill the GlusterFS daemons `/usr/local/sbin/glusterfsd` and `/usr/local/sbin/glusterd` at the second server 192.168.0.31.

4. Remove the attribute of `testdir`:

```bash
setfattr -x user.attr /root/glusterfs-client/testdir
```

5. The GlusterFS client will crash with the use-after-free bug.
The full output of the command that failed:
=================================================================
==326==ERROR: AddressSanitizer: heap-use-after-free on address 0x62100006d434 at pc 0x7fffeee3b776 bp 0x7ffff00c8610 sp 0x7ffff00c8600
READ of size 4 at 0x62100006d434 thread T6
#0 0x7fffeee3b775 in dht_setxattr_mds_cbk /root/glusterfs/xlators/cluster/dht/src/dht-common.c:3944
#1 0x7fffef034527 in client4_0_removexattr_cbk /root/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
#2 0x7ffff721ffca in rpc_clnt_handle_reply /root/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:723
#3 0x7ffff721ffca in rpc_clnt_notify /root/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:890
#4 0x7ffff7219983 in rpc_transport_notify /root/glusterfs/rpc/rpc-lib/src/rpc-transport.c:521
#5 0x7ffff018a5a6 in socket_event_poll_in_async /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2358
#6 0x7ffff019ab39 in gf_async ../../../../libglusterfs/src/glusterfs/async.h:187
#7 0x7ffff019ab39 in socket_event_poll_in /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2399
#8 0x7ffff019ab39 in socket_event_handler /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2790
#9 0x7ffff019ab39 in socket_event_handler /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2710
#10 0x7ffff73fa6c0 in event_dispatch_epoll_handler /root/glusterfs/libglusterfs/src/event-epoll.c:631
#11 0x7ffff73fa6c0 in event_dispatch_epoll_worker /root/glusterfs/libglusterfs/src/event-epoll.c:742
#12 0x7ffff71bf608 in start_thread /build/glibc-YYA7BZ/glibc-2.31/nptl/pthread_create.c:477
#13 0x7ffff70e4102 in __clone (/lib/x86_64-linux-gnu/libc.so.6+0x122102)
0x62100006d434 is located 1844 bytes inside of 4164-byte region [0x62100006cd00,0x62100006dd44)
freed by thread T6 here:
#0 0x7ffff769a7cf in __interceptor_free (/lib/x86_64-linux-gnu/libasan.so.5+0x10d7cf)
#1 0x7ffff7355e19 in __gf_free /root/glusterfs/libglusterfs/src/mem-pool.c:383
#2 0x7fffeedbbacd in dht_local_wipe /root/glusterfs/xlators/cluster/dht/src/dht-helper.c:805
#3 0x7fffeedbbacd in dht_local_wipe /root/glusterfs/xlators/cluster/dht/src/dht-helper.c:713
#4 0x7fffeeea7312 in dht_setxattr_non_mds_cbk /root/glusterfs/xlators/cluster/dht/src/dht-common.c:3898
#5 0x7fffef034527 in client4_0_removexattr_cbk /root/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
#6 0x7fffeefe35ac in client_submit_request /root/glusterfs/xlators/protocol/client/src/client.c:288
#7 0x7fffef01b198 in client4_0_removexattr /root/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:4481
#8 0x7fffeefce5da in client_removexattr /root/glusterfs/xlators/protocol/client/src/client.c:1439
#9 0x7fffeee38f1d in dht_setxattr_mds_cbk /root/glusterfs/xlators/cluster/dht/src/dht-common.c:3977
#10 0x7fffef034527 in client4_0_removexattr_cbk /root/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
#11 0x7ffff721ffca in rpc_clnt_handle_reply /root/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:723
#12 0x7ffff721ffca in rpc_clnt_notify /root/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:890
#13 0x7ffff7219983 in rpc_transport_notify /root/glusterfs/rpc/rpc-lib/src/rpc-transport.c:521
#14 0x7ffff018a5a6 in socket_event_poll_in_async /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2358
#15 0x7ffff019ab39 in gf_async ../../../../libglusterfs/src/glusterfs/async.h:187
#16 0x7ffff019ab39 in socket_event_poll_in /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2399
#17 0x7ffff019ab39 in socket_event_handler /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2790
#18 0x7ffff019ab39 in socket_event_handler /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2710
#19 0x7ffff73fa6c0 in event_dispatch_epoll_handler /root/glusterfs/libglusterfs/src/event-epoll.c:631
#20 0x7ffff73fa6c0 in event_dispatch_epoll_worker /root/glusterfs/libglusterfs/src/event-epoll.c:742
#21 0x7ffff71bf608 in start_thread /build/glibc-YYA7BZ/glibc-2.31/nptl/pthread_create.c:477
previously allocated by thread T8 here:
#0 0x7ffff769adc6 in calloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10ddc6)
#1 0x7ffff7355226 in __gf_calloc /root/glusterfs/libglusterfs/src/mem-pool.c:177
#2 0x7fffeedc7b19 in dht_local_init /root/glusterfs/xlators/cluster/dht/src/dht-helper.c:815
#3 0x7fffeeebba59 in dht_removexattr /root/glusterfs/xlators/cluster/dht/src/dht-common.c:6142
#4 0x7fffeed70781 in gf_utime_removexattr /root/glusterfs/xlators/features/utime/src/utime-autogen-fops.c:428
#5 0x7ffff7481291 in default_removexattr /root/glusterfs/libglusterfs/src/defaults.c:2816
#6 0x7ffff7481291 in default_removexattr /root/glusterfs/libglusterfs/src/defaults.c:2816
#7 0x7ffff7481291 in default_removexattr /root/glusterfs/libglusterfs/src/defaults.c:2816
#8 0x7fffeecb3437 in mdc_removexattr /root/glusterfs/xlators/performance/md-cache/src/md-cache.c:2738
#9 0x7ffff74df738 in default_removexattr_resume /root/glusterfs/libglusterfs/src/defaults.c:2046
#10 0x7ffff731da15 in call_resume_wind /root/glusterfs/libglusterfs/src/call-stub.c:2087
#11 0x7ffff734d8f4 in call_resume /root/glusterfs/libglusterfs/src/call-stub.c:2390
#12 0x7fffeec608bc in iot_worker /root/glusterfs/xlators/performance/io-threads/src/io-threads.c:227
#13 0x7ffff71bf608 in start_thread /build/glibc-YYA7BZ/glibc-2.31/nptl/pthread_create.c:477
Thread T6 created by T0 here:
#0 0x7ffff75c7805 in pthread_create (/lib/x86_64-linux-gnu/libasan.so.5+0x3a805)
#1 0x7ffff72f8b97 in gf_thread_vcreate /root/glusterfs/libglusterfs/src/common-utils.c:3261
#2 0x7ffff730a28d in gf_thread_create /root/glusterfs/libglusterfs/src/common-utils.c:3284
#3 0x7ffff73f8af2 in event_dispatch_epoll /root/glusterfs/libglusterfs/src/event-epoll.c:797
#4 0x7ffff7353f89 in gf_event_dispatch /root/glusterfs/libglusterfs/src/event.c:115
#5 0x7ffff7461b7f in gf_io_main /root/glusterfs/libglusterfs/src/gf-io.c:431
#6 0x7ffff7461b7f in gf_io_run /root/glusterfs/libglusterfs/src/gf-io.c:516
#7 0x55555556c37a in main /root/glusterfs/glusterfsd/src/glusterfsd.c:2774
#8 0x7ffff6fe90b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2)
Thread T8 created by T7 here:
#0 0x7ffff75c7805 in pthread_create (/lib/x86_64-linux-gnu/libasan.so.5+0x3a805)
#1 0x7ffff72f8b97 in gf_thread_vcreate /root/glusterfs/libglusterfs/src/common-utils.c:3261
#2 0x7ffff730a28d in gf_thread_create /root/glusterfs/libglusterfs/src/common-utils.c:3284
#3 0x7fffeec5face in __iot_workers_scale /root/glusterfs/xlators/performance/io-threads/src/io-threads.c:830
#4 0x7fffeec67d62 in iot_workers_scale /root/glusterfs/xlators/performance/io-threads/src/io-threads.c:853
#5 0x7fffeec67d62 in init /root/glusterfs/xlators/performance/io-threads/src/io-threads.c:1251
#6 0x7ffff72e5208 in __xlator_init /root/glusterfs/libglusterfs/src/xlator.c:610
#7 0x7ffff72e5208 in xlator_init /root/glusterfs/libglusterfs/src/xlator.c:635
#8 0x7ffff7378672 in glusterfs_graph_init /root/glusterfs/libglusterfs/src/graph.c:474
#9 0x7ffff737971b in glusterfs_graph_activate /root/glusterfs/libglusterfs/src/graph.c:823
#10 0x555555573a4e in glusterfs_process_volfp /root/glusterfs/glusterfsd/src/glusterfsd.c:2493
#11 0x555555584675 in mgmt_getspec_cbk /root/glusterfs/glusterfsd/src/glusterfsd-mgmt.c:2444
#12 0x7ffff721ffca in rpc_clnt_handle_reply /root/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:723
#13 0x7ffff721ffca in rpc_clnt_notify /root/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:890
#14 0x7ffff7219983 in rpc_transport_notify /root/glusterfs/rpc/rpc-lib/src/rpc-transport.c:521
#15 0x7ffff018a5a6 in socket_event_poll_in_async /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2358
#16 0x7ffff019ab39 in gf_async ../../../../libglusterfs/src/glusterfs/async.h:187
#17 0x7ffff019ab39 in socket_event_poll_in /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2399
#18 0x7ffff019ab39 in socket_event_handler /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2790
#19 0x7ffff019ab39 in socket_event_handler /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2710
#20 0x7ffff73fa6c0 in event_dispatch_epoll_handler /root/glusterfs/libglusterfs/src/event-epoll.c:631
#21 0x7ffff73fa6c0 in event_dispatch_epoll_worker /root/glusterfs/libglusterfs/src/event-epoll.c:742
#22 0x7ffff71bf608 in start_thread /build/glibc-YYA7BZ/glibc-2.31/nptl/pthread_create.c:477
Thread T7 created by T0 here:
#0 0x7ffff75c7805 in pthread_create (/lib/x86_64-linux-gnu/libasan.so.5+0x3a805)
#1 0x7ffff72f8b97 in gf_thread_vcreate /root/glusterfs/libglusterfs/src/common-utils.c:3261
#2 0x7ffff730a28d in gf_thread_create /root/glusterfs/libglusterfs/src/common-utils.c:3284
#3 0x7ffff73f8af2 in event_dispatch_epoll /root/glusterfs/libglusterfs/src/event-epoll.c:797
#4 0x7ffff7353f89 in gf_event_dispatch /root/glusterfs/libglusterfs/src/event.c:115
#5 0x7ffff7461b7f in gf_io_main /root/glusterfs/libglusterfs/src/gf-io.c:431
#6 0x7ffff7461b7f in gf_io_run /root/glusterfs/libglusterfs/src/gf-io.c:516
#7 0x55555556c37a in main /root/glusterfs/glusterfsd/src/glusterfsd.c:2774
#8 0x7ffff6fe90b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2)
SUMMARY: AddressSanitizer: heap-use-after-free /root/glusterfs/xlators/cluster/dht/src/dht-common.c:3944 in dht_setxattr_mds_cbk
Shadow bytes around the buggy address:
0x0c4280005a30: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c4280005a40: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c4280005a50: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c4280005a60: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c4280005a70: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
=>0x0c4280005a80: fd fd fd fd fd fd[fd]fd fd fd fd fd fd fd fd fd
0x0c4280005a90: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c4280005aa0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c4280005ab0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c4280005ac0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c4280005ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==326==ABORTING
Expected results: The client should not crash.
Mandatory info:
- The output of the `gluster volume info` command:
Volume Name: test_volume
Type: Distribute
Volume ID: dc8b32ae-2e0d-4ff9-af1e-bbe3dcf9eb9d
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.0.30:/root/glusterfs-server
Brick2: 192.168.0.31:/root/glusterfs-server
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
- The output of the `gluster volume status` command:
Status of volume: test_volume
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 192.168.0.30:/root/glusterfs-server 60519 0 Y 328
Brick 192.168.0.31:/root/glusterfs-server 52119 0 Y 399
Task Status of Volume test_volume
------------------------------------------------------------------------------
There are no active volume tasks
- The output of the `gluster volume heal` command:
Launching heal operation to perform index self heal on volume test_volume has been unsuccessful:
Self-heal-daemon is disabled. Heal will not be triggered on volume test_volume
- The client log at /var/log/glusterfs/root-glusterfs-client.log:
[2022-08-22 14:19:40.656407 +0000] I [MSGID: 114057] [client-handshake.c:871:select_server_supported_programs] 0-test_volume-client-0: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2022-08-22 14:19:40.656570 +0000] I [MSGID: 114057] [client-handshake.c:871:select_server_supported_programs] 0-test_volume-client-1: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2022-08-22 14:19:40.666962 +0000] I [MSGID: 114046] [client-handshake.c:621:client_setvolume_cbk] 0-test_volume-client-1: Connected, attached to remote volume [{conn-name=test_volume-client-1}, {remote_subvol=/root/glusterfs-server}]
[2022-08-22 14:19:40.666962 +0000] I [MSGID: 114046] [client-handshake.c:621:client_setvolume_cbk] 0-test_volume-client-0: Connected, attached to remote volume [{conn-name=test_volume-client-0}, {remote_subvol=/root/glusterfs-server}]
[2022-08-22 14:19:40.673626 +0000] I [fuse-bridge.c:5328:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.34
[2022-08-22 14:19:40.673711 +0000] I [fuse-bridge.c:5960:fuse_graph_sync] 0-fuse: switched to graph 0
[2022-08-22 14:19:40.679061 +0000] I [MSGID: 109060] [dht-layout.c:562:dht_layout_normalize] 0-test_volume-dht: Found anomalies [{path=/}, {gfid=00000000-0000-0000-0000-000000000001}, {holes=1}, {overlaps=0}]
[2022-08-22 14:20:36.659851 +0000] W [socket.c:751:__socket_rwv] 0-test_volume-client-1: readv on 192.168.0.31:54826 failed (No data available)
[2022-08-22 14:20:36.659985 +0000] I [MSGID: 114018] [client.c:2242:client_rpc_notify] 0-test_volume-client-1: disconnected from client, process will keep trying to connect glusterd until brick's port is available [{conn-name=test_volume-client-1}]
[2022-08-22 14:20:53.872093 +0000] W [MSGID: 114031] [client-rpc-fops_v2.c:2561:client4_0_lookup_cbk] 0-test_volume-client-1: remote operation failed. [{path=/}, {gfid=00000000-0000-0000-0000-000000000001}, {errno=107}, {error=Transport endpoint is not connected}]
[2022-08-22 14:20:53.872291 +0000] W [MSGID: 114029] [client-rpc-fops_v2.c:2991:client4_0_lookup] 0-test_volume-client-1: failed to send the fop []
[2022-08-22 14:20:53.874557 +0000] W [MSGID: 114031] [client-rpc-fops_v2.c:2561:client4_0_lookup_cbk] 0-test_volume-client-1: remote operation failed. [{path=/testdir}, {gfid=9bcc505b-c52c-4f88-925f-62a64d5e432a}, {errno=107}, {error=Transport endpoint is not connected}]
[2022-08-22 14:20:53.874752 +0000] W [MSGID: 114029] [client-rpc-fops_v2.c:2991:client4_0_lookup] 0-test_volume-client-1: failed to send the fop []
[2022-08-22 14:20:53.877789 +0000] W [MSGID: 114031] [client-rpc-fops_v2.c:1057:client4_0_removexattr_cbk] 0-test_volume-client-1: remote operation failed. [{errno=107}, {error=Transport endpoint is not connected}]
[2022-08-22 14:20:53.877958 +0000] W [MSGID: 114029] [client-rpc-fops_v2.c:4485:client4_0_removexattr] 0-test_volume-client-1: failed to send the fop []
- Is there any crash? Provide the backtrace and coredump
Additional info:
- The operating system / glusterfs version: the latest git version, commit 37f6ced36792e42b98160f9cfffdbf307421e155
What are the compilation flags you have used to compile the code? I have tried to reproduce the issue with default flags. I am not getting any crash.
Hi @mohit84 , thanks for your reply.
The configuration flags are: `./configure --disable-linux-io_uring --enable-asan`
Please also set the ASAN_OPTIONS environment variable as `ASAN_OPTIONS="log_path=~/asan-log"`. Once the use-after-free bug is detected, the crash stack trace is written into the `asan-log` file. Otherwise, no crash stack is shown in the terminal; the client daemon simply crashes and exits.
Hi @mohit84 , thanks for patching it. I'm new to GlusterFS and thus still don't fully understand the internals of this vulnerability. I'd appreciate it if you could answer the following questions to help me understand the logic behind this bug:
- Why is `client4_0_removexattr_cbk` called three times (once at the read and twice when freeing the memory) in thread T6 shown in the above stack trace? And what is the difference between `dht_setxattr_non_mds_cbk` and `dht_setxattr_mds_cbk`, i.e. what are their semantics?
- Is the memory release triggered by the disconnection of the brick?
Thanks in advance.
@lvtao-sec I have tried to reproduce the issue but I am not able to. Would it be possible for you to reproduce it after enabling TRACE/DEBUG logs for the client?
Hi @mohit84 , I tried with TRACE and DEBUG logs respectively. This use-after-free bug can be reproduced in both settings.
In case there are any mistakes during reproduction, I repeat some important points here:
- two servers with distributed mode
- set `ASAN_OPTIONS="log_path=~/asan-log"`
The client DEBUG mode log:
+------------------------------------------------------------------------------+
[2023-03-04 12:14:43.934758 +0000] I [MSGID: 114057] [client-handshake.c:871:select_server_supported_programs] 0-test-volume-client-0: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2023-03-04 12:14:43.935373 +0000] I [MSGID: 114057] [client-handshake.c:871:select_server_supported_programs] 0-test-volume-client-1: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2023-03-04 12:14:43.950719 +0000] I [MSGID: 114046] [client-handshake.c:621:client_setvolume_cbk] 0-test-volume-client-1: Connected, attached to remote volume [{conn-name=test-volume-client-1}, {remote_subvol=/root/glusterfs-server}]
[2023-03-04 12:14:43.950715 +0000] I [MSGID: 114046] [client-handshake.c:621:client_setvolume_cbk] 0-test-volume-client-0: Connected, attached to remote volume [{conn-name=test-volume-client-0}, {remote_subvol=/root/glusterfs-server}]
[2023-03-04 12:14:43.963930 +0000] I [fuse-bridge.c:5328:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.34
[2023-03-04 12:14:43.964105 +0000] I [fuse-bridge.c:5960:fuse_graph_sync] 0-fuse: switched to graph 0
[2023-03-04 12:14:43.971682 +0000] I [MSGID: 109060] [dht-layout.c:562:dht_layout_normalize] 0-test-volume-dht: Found anomalies [{path=/}, {gfid=00000000-0000-0000-0000-000000000001}, {holes=1}, {overlaps=0}]
[2023-03-04 12:15:33.628886 +0000] I [glusterfsd-mgmt.c:35:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2023-03-04 12:15:33.656170 +0000] I [glusterfsd-mgmt.c:2336:mgmt_getspec_cbk] 0-glusterfs: Received list of available volfile servers: 192.168.0.31:24007
[2023-03-04 12:15:33.656332 +0000] I [MSGID: 101219] [common-utils.c:3069:gf_set_volfile_server_common] 0-gluster: duplicate entry for volfile-server [{errno=17}, {error=File exists}]
[2023-03-04 12:15:33.661104 +0000] I [MSGID: 0] [options.c:1384:xlator_option_reconf_time] 0-test-volume-client-0: option ping-timeout using set value 1
[2023-03-04 12:15:33.661286 +0000] I [MSGID: 0] [options.c:1380:xlator_option_reconf_bool] 0-test-volume-client-0: option send-gids using set value true
[2023-03-04 12:15:33.661378 +0000] I [MSGID: 0] [options.c:1380:xlator_option_reconf_bool] 0-test-volume-client-0: option strict-locks using set value off
[2023-03-04 12:15:33.661502 +0000] I [MSGID: 0] [options.c:1384:xlator_option_reconf_time] 0-test-volume-client-1: option ping-timeout using set value 1
[2023-03-04 12:15:33.661605 +0000] I [MSGID: 0] [options.c:1380:xlator_option_reconf_bool] 0-test-volume-client-1: option send-gids using set value true
[2023-03-04 12:15:33.661656 +0000] I [MSGID: 0] [options.c:1380:xlator_option_reconf_bool] 0-test-volume-client-1: option strict-locks using set value off
[2023-03-04 12:15:33.661777 +0000] I [MSGID: 0] [options.c:1380:xlator_option_reconf_bool] 0-test-volume-dht: option lock-migration using set value off
[2023-03-04 12:15:33.661870 +0000] I [MSGID: 0] [options.c:1380:xlator_option_reconf_bool] 0-test-volume-dht: option force-migration using set value off
[2023-03-04 12:15:33.662104 +0000] I [MSGID: 0] [options.c:1380:xlator_option_reconf_bool] 0-test-volume-utime: option noatime using set value on
[2023-03-04 12:15:33.662770 +0000] I [MSGID: 0] [options.c:1380:xlator_option_reconf_bool] 0-test-volume: option count-fop-hits using set value off
[2023-03-04 12:15:33.662902 +0000] I [MSGID: 0] [options.c:1380:xlator_option_reconf_bool] 0-test-volume: option latency-measurement using set value off
[2023-03-04 12:15:33.663118 +0000] I [io-stats.c:3797:ios_sample_buf_size_configure] 0-test-volume: Configure ios_sample_buf size is 1024 because ios_sample_interval is 0
[2023-03-04 12:15:33.663198 +0000] I [MSGID: 0] [options.c:1371:xlator_option_reconf_str] 0-test-volume: option log-level using set value DEBUG
[2023-03-04 12:15:33.663431 +0000] I [MSGID: 0] [options.c:1375:xlator_option_reconf_int32] 0-test-volume: option threads using set value 16
[2023-03-04 12:15:33.663569 +0000] D [io-stats.c:3939:reconfigure] 0-test-volume: reconfigure returning 0
[2023-03-04 12:15:33.663960 +0000] D [glusterfsd-mgmt.c:2423:mgmt_getspec_cbk] 0-glusterfsd-mgmt: No need to re-load volfile, reconfigure done
[2023-03-04 12:15:59.084513 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: LOOKUP scheduled as fast priority fop
[2023-03-04 12:15:59.086609 +0000] D [rpc-clnt-ping.c:90:rpc_clnt_remove_ping_timer_locked] (--> /usr/local/lib/libglusterfs.so.0(gf_backtrace_save+0x29)[0x7ffff730b069] (--> /usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x41a)[0x7ffff72f280a] (--> /usr/local/lib/libgfrpc.so.0(+0x382ec)[0x7ffff722b2ec] (--> /usr/local/lib/libgfrpc.so.0(+0x3966d)[0x7ffff722c66d] (--> /usr/local/lib/libgfrpc.so.0(+0x39b0d)[0x7ffff722cb0d] ))))) 0-: 192.168.0.30:58262: ping timer event already removed
[2023-03-04 12:15:59.087127 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /: revalidate lookup on test-volume-client-0 returned op_ret 0
[2023-03-04 12:15:59.087382 +0000] D [MSGID: 0] [dht-common.c:1796:dht_revalidate_cbk] 0-test-volume-dht: /: internal xattr trusted.glusterfs.dht.mds is not present on subvol test-volume-client-0(gfid is 00000000-0000-0000-0000-000000000001)
[2023-03-04 12:15:59.087465 +0000] D [rpc-clnt-ping.c:188:rpc_clnt_ping_cbk] 0-test-volume-client-0: Ping latency is 0ms
[2023-03-04 12:15:59.087751 +0000] D [rpc-clnt-ping.c:90:rpc_clnt_remove_ping_timer_locked] (--> /usr/local/lib/libglusterfs.so.0(gf_backtrace_save+0x29)[0x7ffff730b069] (--> /usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x41a)[0x7ffff72f280a] (--> /usr/local/lib/libgfrpc.so.0(+0x382ec)[0x7ffff722b2ec] (--> /usr/local/lib/libgfrpc.so.0(+0x3966d)[0x7ffff722c66d] (--> /usr/local/lib/libgfrpc.so.0(+0x39b0d)[0x7ffff722cb0d] ))))) 0-: 192.168.0.31:60825: ping timer event already removed
[2023-03-04 12:15:59.088414 +0000] D [rpc-clnt-ping.c:188:rpc_clnt_ping_cbk] 0-test-volume-client-1: Ping latency is 0ms
[2023-03-04 12:15:59.089645 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /: revalidate lookup on test-volume-client-1 returned op_ret 0
[2023-03-04 12:15:59.089722 +0000] D [MSGID: 0] [dht-common.c:1796:dht_revalidate_cbk] 0-test-volume-dht: /: internal xattr trusted.glusterfs.dht.mds is not present on subvol test-volume-client-1(gfid is 00000000-0000-0000-0000-000000000001)
[2023-03-04 12:15:59.090222 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: LOOKUP scheduled as fast priority fop
[2023-03-04 12:15:59.090639 +0000] D [MSGID: 0] [dht-common.c:3316:dht_do_fresh_lookup] 0-test-volume-dht: /testdir: No gfid-req available
[2023-03-04 12:15:59.090772 +0000] D [MSGID: 0] [dht-common.c:3358:dht_do_fresh_lookup] 0-test-volume-dht: /testdir: Calling fresh lookup on test-volume-client-0
[2023-03-04 12:15:59.092736 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032898, test-volume-client-0 returned -1 [No such file or directory]
[2023-03-04 12:15:59.092871 +0000] D [MSGID: 0] [dht-common.c:3070:dht_lookup_cbk] 0-test-volume-dht: /testdir: fresh_lookup on test-volume-client-0 returned with op_ret -1 [No such file or directory]
[2023-03-04 12:15:59.092977 +0000] D [MSGID: 0] [dht-common.c:3081:dht_lookup_cbk] 0-test-volume-dht: Entry /testdir missing on subvol test-volume-client-0
[2023-03-04 12:15:59.093087 +0000] D [MSGID: 0] [dht-common.c:2871:dht_lookup_everywhere] 0-test-volume-dht: winding lookup call to 2 subvols
[2023-03-04 12:15:59.094630 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032898, test-volume-client-0 returned -1 [No such file or directory]
[2023-03-04 12:15:59.095210 +0000] D [MSGID: 0] [dht-common.c:2684:dht_lookup_everywhere_cbk] 0-test-volume-dht: returned with op_ret -1 and op_errno 2 (/testdir) from subvol test-volume-client-0
[2023-03-04 12:15:59.095257 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032898, test-volume-client-1 returned -1 [No such file or directory]
[2023-03-04 12:15:59.095481 +0000] D [MSGID: 0] [dht-common.c:2684:dht_lookup_everywhere_cbk] 0-test-volume-dht: returned with op_ret -1 and op_errno 2 (/testdir) from subvol test-volume-client-1
[2023-03-04 12:15:59.095559 +0000] D [MSGID: 0] [dht-common.c:2381:dht_lookup_everywhere_done] 0-test-volume-dht: STATUS: hashed_subvol test-volume-client-0 cached_subvol null
[2023-03-04 12:15:59.095681 +0000] D [MSGID: 0] [dht-common.c:2436:dht_lookup_everywhere_done] 0-test-volume-dht: There was no cached file and unlink on hashed is not skipped /testdir
[2023-03-04 12:15:59.095794 +0000] D [MSGID: 0] [dht-common.c:2441:dht_lookup_everywhere_done] 0-stack-trace: stack-address: 0x61c000032898, test-volume-dht returned -1 [No such file or directory]
[2023-03-04 12:15:59.095927 +0000] D [MSGID: 0] [utime.c:218:gf_utime_set_mdata_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032898, test-volume-utime returned -1 [No such file or directory]
[2023-03-04 12:15:59.096079 +0000] D [MSGID: 0] [write-behind.c:2371:wb_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032898, test-volume-write-behind returned -1 [No such file or directory]
[2023-03-04 12:15:59.096202 +0000] D [MSGID: 0] [quick-read.c:606:qr_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032898, test-volume-quick-read returned -1 [No such file or directory]
[2023-03-04 12:15:59.096333 +0000] D [MSGID: 0] [md-cache.c:1269:mdc_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032898, test-volume-md-cache returned -1 [No such file or directory]
[2023-03-04 12:15:59.096436 +0000] D [MSGID: 0] [defaults.c:1546:default_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032898, test-volume-io-threads returned -1 [No such file or directory]
[2023-03-04 12:15:59.096550 +0000] D [MSGID: 0] [io-stats.c:2297:io_stats_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032898, test-volume returned -1 [No such file or directory]
[2023-03-04 12:15:59.096706 +0000] D [fuse-resolve.c:59:fuse_resolve_entry_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001/testdir: failed to resolve (No such file or directory)
[2023-03-04 12:15:59.096959 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: LOOKUP scheduled as fast priority fop
[2023-03-04 12:15:59.097362 +0000] D [MSGID: 0] [dht-common.c:3358:dht_do_fresh_lookup] 0-test-volume-dht: /testdir: Calling fresh lookup on test-volume-client-0
[2023-03-04 12:15:59.099179 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000018098, test-volume-client-0 returned -1 [No such file or directory]
[2023-03-04 12:15:59.099268 +0000] D [MSGID: 0] [dht-common.c:3070:dht_lookup_cbk] 0-test-volume-dht: /testdir: fresh_lookup on test-volume-client-0 returned with op_ret -1 [No such file or directory]
[2023-03-04 12:15:59.099313 +0000] D [MSGID: 0] [dht-common.c:3081:dht_lookup_cbk] 0-test-volume-dht: Entry /testdir missing on subvol test-volume-client-0
[2023-03-04 12:15:59.099359 +0000] D [MSGID: 0] [dht-common.c:2871:dht_lookup_everywhere] 0-test-volume-dht: winding lookup call to 2 subvols
[2023-03-04 12:15:59.100782 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000018098, test-volume-client-0 returned -1 [No such file or directory]
[2023-03-04 12:15:59.100919 +0000] D [MSGID: 0] [dht-common.c:2684:dht_lookup_everywhere_cbk] 0-test-volume-dht: returned with op_ret -1 and op_errno 2 (/testdir) from subvol test-volume-client-0
[2023-03-04 12:15:59.101297 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000018098, test-volume-client-1 returned -1 [No such file or directory]
[2023-03-04 12:15:59.101404 +0000] D [MSGID: 0] [dht-common.c:2684:dht_lookup_everywhere_cbk] 0-test-volume-dht: returned with op_ret -1 and op_errno 2 (/testdir) from subvol test-volume-client-1
[2023-03-04 12:15:59.101554 +0000] D [MSGID: 0] [dht-common.c:2381:dht_lookup_everywhere_done] 0-test-volume-dht: STATUS: hashed_subvol test-volume-client-0 cached_subvol null
[2023-03-04 12:15:59.101619 +0000] D [MSGID: 0] [dht-common.c:2436:dht_lookup_everywhere_done] 0-test-volume-dht: There was no cached file and unlink on hashed is not skipped /testdir
[2023-03-04 12:15:59.101680 +0000] D [MSGID: 0] [dht-common.c:2441:dht_lookup_everywhere_done] 0-stack-trace: stack-address: 0x61c000018098, test-volume-dht returned -1 [No such file or directory]
[2023-03-04 12:15:59.101762 +0000] D [MSGID: 0] [utime.c:218:gf_utime_set_mdata_lookup_cbk] 0-stack-trace: stack-address: 0x61c000018098, test-volume-utime returned -1 [No such file or directory]
[2023-03-04 12:15:59.101834 +0000] D [MSGID: 0] [write-behind.c:2371:wb_lookup_cbk] 0-stack-trace: stack-address: 0x61c000018098, test-volume-write-behind returned -1 [No such file or directory]
[2023-03-04 12:15:59.101905 +0000] D [MSGID: 0] [quick-read.c:606:qr_lookup_cbk] 0-stack-trace: stack-address: 0x61c000018098, test-volume-quick-read returned -1 [No such file or directory]
[2023-03-04 12:15:59.101984 +0000] D [MSGID: 0] [md-cache.c:1269:mdc_lookup_cbk] 0-stack-trace: stack-address: 0x61c000018098, test-volume-md-cache returned -1 [No such file or directory]
[2023-03-04 12:15:59.102104 +0000] D [MSGID: 0] [defaults.c:1546:default_lookup_cbk] 0-stack-trace: stack-address: 0x61c000018098, test-volume-io-threads returned -1 [No such file or directory]
[2023-03-04 12:15:59.102174 +0000] D [MSGID: 0] [io-stats.c:2297:io_stats_lookup_cbk] 0-stack-trace: stack-address: 0x61c000018098, test-volume returned -1 [No such file or directory]
[2023-03-04 12:15:59.102911 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: LOOKUP scheduled as fast priority fop
[2023-03-04 12:15:59.103416 +0000] D [MSGID: 0] [dht-common.c:3316:dht_do_fresh_lookup] 0-test-volume-dht: /testdir: No gfid-req available
[2023-03-04 12:15:59.103505 +0000] D [MSGID: 0] [dht-common.c:3358:dht_do_fresh_lookup] 0-test-volume-dht: /testdir: Calling fresh lookup on test-volume-client-0
[2023-03-04 12:15:59.105600 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000033098, test-volume-client-0 returned -1 [No such file or directory]
[2023-03-04 12:15:59.105734 +0000] D [MSGID: 0] [dht-common.c:3070:dht_lookup_cbk] 0-test-volume-dht: /testdir: fresh_lookup on test-volume-client-0 returned with op_ret -1 [No such file or directory]
[2023-03-04 12:15:59.105800 +0000] D [MSGID: 0] [dht-common.c:3081:dht_lookup_cbk] 0-test-volume-dht: Entry /testdir missing on subvol test-volume-client-0
[2023-03-04 12:15:59.105871 +0000] D [MSGID: 0] [dht-common.c:2871:dht_lookup_everywhere] 0-test-volume-dht: winding lookup call to 2 subvols
[2023-03-04 12:15:59.107931 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000033098, test-volume-client-0 returned -1 [No such file or directory]
[2023-03-04 12:15:59.108085 +0000] D [MSGID: 0] [dht-common.c:2684:dht_lookup_everywhere_cbk] 0-test-volume-dht: returned with op_ret -1 and op_errno 2 (/testdir) from subvol test-volume-client-0
[2023-03-04 12:15:59.108285 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000033098, test-volume-client-1 returned -1 [No such file or directory]
[2023-03-04 12:15:59.108372 +0000] D [MSGID: 0] [dht-common.c:2684:dht_lookup_everywhere_cbk] 0-test-volume-dht: returned with op_ret -1 and op_errno 2 (/testdir) from subvol test-volume-client-1
[2023-03-04 12:15:59.108442 +0000] D [MSGID: 0] [dht-common.c:2381:dht_lookup_everywhere_done] 0-test-volume-dht: STATUS: hashed_subvol test-volume-client-0 cached_subvol null
[2023-03-04 12:15:59.108543 +0000] D [MSGID: 0] [dht-common.c:2436:dht_lookup_everywhere_done] 0-test-volume-dht: There was no cached file and unlink on hashed is not skipped /testdir
[2023-03-04 12:15:59.108605 +0000] D [MSGID: 0] [dht-common.c:2441:dht_lookup_everywhere_done] 0-stack-trace: stack-address: 0x61c000033098, test-volume-dht returned -1 [No such file or directory]
[2023-03-04 12:15:59.108666 +0000] D [MSGID: 0] [utime.c:218:gf_utime_set_mdata_lookup_cbk] 0-stack-trace: stack-address: 0x61c000033098, test-volume-utime returned -1 [No such file or directory]
[2023-03-04 12:15:59.108736 +0000] D [MSGID: 0] [write-behind.c:2371:wb_lookup_cbk] 0-stack-trace: stack-address: 0x61c000033098, test-volume-write-behind returned -1 [No such file or directory]
[2023-03-04 12:15:59.108807 +0000] D [MSGID: 0] [quick-read.c:606:qr_lookup_cbk] 0-stack-trace: stack-address: 0x61c000033098, test-volume-quick-read returned -1 [No such file or directory]
[2023-03-04 12:15:59.108874 +0000] D [MSGID: 0] [md-cache.c:1269:mdc_lookup_cbk] 0-stack-trace: stack-address: 0x61c000033098, test-volume-md-cache returned -1 [No such file or directory]
[2023-03-04 12:15:59.108941 +0000] D [MSGID: 0] [defaults.c:1546:default_lookup_cbk] 0-stack-trace: stack-address: 0x61c000033098, test-volume-io-threads returned -1 [No such file or directory]
[2023-03-04 12:15:59.109033 +0000] D [MSGID: 0] [io-stats.c:2297:io_stats_lookup_cbk] 0-stack-trace: stack-address: 0x61c000033098, test-volume returned -1 [No such file or directory]
[2023-03-04 12:15:59.109173 +0000] D [fuse-resolve.c:59:fuse_resolve_entry_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001/testdir: failed to resolve (No such file or directory)
[2023-03-04 12:15:59.109323 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: MKDIR scheduled as normal priority fop
[2023-03-04 12:15:59.112288 +0000] D [MSGID: 0] [dht-diskusage.c:82:dht_du_info_cbk] 0-test-volume-dht: subvolume 'test-volume-client-1': avail_percent is: 90.00 and avail_space is: 39740932096 and avail_inodes is: 98.00
[2023-03-04 12:15:59.112928 +0000] D [MSGID: 0] [dht-diskusage.c:82:dht_du_info_cbk] 0-test-volume-dht: subvolume 'test-volume-client-0': avail_percent is: 90.00 and avail_space is: 39740932096 and avail_inodes is: 98.00
[2023-03-04 12:15:59.114840 +0000] D [MSGID: 101015] [glusterfs3.h:785:dict_to_xdr] 0-dict: key 'trusted.glusterfs.dht.mds' would not be sent on wire in the future [Invalid argument]
[2023-03-04 12:15:59.115054 +0000] D [MSGID: 101015] [glusterfs3.h:785:dict_to_xdr] 0-dict: key 'trusted.glusterfs.dht' would not be sent on wire in the future [Invalid argument]
[2023-03-04 12:15:59.122423 +0000] D [MSGID: 0] [dht-selfheal.c:1781:dht_selfheal_layout_new_directory] 0-test-volume-dht: chunk size = 0xffffffff / 84154 = 51036.995211
[2023-03-04 12:15:59.122574 +0000] D [MSGID: 0] [dht-selfheal.c:1818:dht_selfheal_layout_new_directory] 0-test-volume-dht: assigning range size 0x7fffffff to test-volume-client-1
[2023-03-04 12:15:59.122672 +0000] D [MSGID: 0] [dht-selfheal.c:1818:dht_selfheal_layout_new_directory] 0-test-volume-dht: assigning range size 0x7fffffff to test-volume-client-0
[2023-03-04 12:15:59.125100 +0000] D [MSGID: 109036] [dht-common.c:11372:dht_log_new_layout_for_dir_selfheal] 0-test-volume-dht: Setting layout of /testdir with [Subvol_name: test-volume-client-0, Err: -1 , Start: 0x7fffffff, Stop: 0xffffffff, Hash: 0x0], [Subvol_name: test-volume-client-1, Err: -1 , Start: 0x0, Stop: 0x7ffffffe, Hash: 0x0],
[2023-03-04 12:15:59.125269 +0000] D [MSGID: 101015] [glusterfs3.h:785:dict_to_xdr] 0-dict: key 'trusted.glusterfs.dht' would not be sent on wire in the future [Invalid argument]
[2023-03-04 12:15:59.127215 +0000] D [dict.c:2466:dict_get_bin] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0x720d7) [0x7fffeedf80d7] -->/usr/local/lib/libglusterfs.so.0(dict_get_bin+0x2d4) [0x7ffff72d8ea4] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: key dht-get-iatt-in-xattr, pointer type asked, has iatt type [Invalid argument]
[2023-03-04 12:15:59.127418 +0000] D [dict.c:2466:dict_get_bin] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0x720d7) [0x7fffeedf80d7] -->/usr/local/lib/libglusterfs.so.0(dict_get_bin+0x2d4) [0x7ffff72d8ea4] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: key dht-get-iatt-in-xattr, pointer type asked, has iatt type [Invalid argument]
[2023-03-04 12:15:59.127916 +0000] D [dict.c:481:dict_get] (-->/usr/local/lib/glusterfs/11dev/xlator/mount/fuse.so(+0x301c8) [0x7ffff2d731c8] -->/usr/local/lib/libglusterfs.so.0(dict_get+0x100) [0x7ffff72cdfa0] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: !this || key=trusted.glusterfs.namespace [Invalid argument]
[2023-03-04 12:15:59.133708 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:15:59.125552 +0000] D [MSGID: 101015] [glusterfs3.h:785:dict_to_xdr] 0-dict: key 'trusted.glusterfs.dht' would not be sent on wire in the future [Invalid argument]
[2023-03-04 12:15:59.133685 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: LOOKUP scheduled as fast priority fop
[2023-03-04 12:15:59.136473 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /: revalidate lookup on test-volume-client-0 returned op_ret 0
[2023-03-04 12:15:59.136570 +0000] D [MSGID: 0] [dht-common.c:1796:dht_revalidate_cbk] 0-test-volume-dht: /: internal xattr trusted.glusterfs.dht.mds is not present on subvol test-volume-client-0(gfid is 00000000-0000-0000-0000-000000000001)
[2023-03-04 12:15:59.136770 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /: revalidate lookup on test-volume-client-1 returned op_ret 0
[2023-03-04 12:15:59.136828 +0000] D [MSGID: 0] [dht-common.c:1796:dht_revalidate_cbk] 0-test-volume-dht: /: internal xattr trusted.glusterfs.dht.mds is not present on subvol test-volume-client-1(gfid is 00000000-0000-0000-0000-000000000001)
[2023-03-04 12:15:59.137124 +0000] D [fuse-helpers.c:680:fuse_ignore_xattr_set] 0-glusterfs-fuse: allowing setxattr: key [user.attr], client pid [0]
[2023-03-04 12:15:59.137327 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: SETXATTR scheduled as normal priority fop
[2023-03-04 12:15:59.137755 +0000] D [MSGID: 101015] [glusterfs3.h:785:dict_to_xdr] 0-dict: key 'trusted.glusterfs.dht.mds' would not be sent on wire in the future [Invalid argument]
[2023-03-04 12:15:59.140096 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:1525:client4_0_xattrop_cbk] 0-test-volume-client-0: resetting op_ret to 0 from 0
[2023-03-04 12:15:59.140237 +0000] D [MSGID: 101015] [glusterfs3.h:785:dict_to_xdr] 0-dict: key 'user.attr' would not be sent on wire in the future [Invalid argument]
[2023-03-04 12:15:59.143197 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:15:59.141687 +0000] D [MSGID: 101015] [glusterfs3.h:785:dict_to_xdr] 0-dict: key 'user.attr' would not be sent on wire in the future [Invalid argument]
[2023-03-04 12:15:59.143189 +0000] D [MSGID: 101015] [glusterfs3.h:785:dict_to_xdr] 0-dict: key 'trusted.glusterfs.dht.mds' would not be sent on wire in the future [Invalid argument]
[2023-03-04 12:15:59.144626 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:1525:client4_0_xattrop_cbk] 0-test-volume-client-0: resetting op_ret to 0 from 0
[2023-03-04 12:16:00.087746 +0000] D [rpc-clnt-ping.c:308:rpc_clnt_start_ping] 0-test-volume-client-0: returning because transport is connected but there are no frames
[2023-03-04 12:16:00.088547 +0000] D [rpc-clnt-ping.c:308:rpc_clnt_start_ping] 0-test-volume-client-1: returning because transport is connected but there are no frames
[2023-03-04 12:16:00.292212 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: LOOKUP scheduled as fast priority fop
[2023-03-04 12:16:00.293299 +0000] D [rpc-clnt-ping.c:90:rpc_clnt_remove_ping_timer_locked] (--> /usr/local/lib/libglusterfs.so.0(gf_backtrace_save+0x29)[0x7ffff730b069] (--> /usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x41a)[0x7ffff72f280a] (--> /usr/local/lib/libgfrpc.so.0(+0x382ec)[0x7ffff722b2ec] (--> /usr/local/lib/libgfrpc.so.0(+0x3966d)[0x7ffff722c66d] (--> /usr/local/lib/libgfrpc.so.0(+0x39b0d)[0x7ffff722cb0d] ))))) 0-: 192.168.0.30:58262: ping timer event already removed
[2023-03-04 12:16:00.294172 +0000] D [rpc-clnt-ping.c:188:rpc_clnt_ping_cbk] 0-test-volume-client-0: Ping latency is 0ms
[2023-03-04 12:16:00.294221 +0000] D [rpc-clnt-ping.c:90:rpc_clnt_remove_ping_timer_locked] (--> /usr/local/lib/libglusterfs.so.0(gf_backtrace_save+0x29)[0x7ffff730b069] (--> /usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x41a)[0x7ffff72f280a] (--> /usr/local/lib/libgfrpc.so.0(+0x382ec)[0x7ffff722b2ec] (--> /usr/local/lib/libgfrpc.so.0(+0x3966d)[0x7ffff722c66d] (--> /usr/local/lib/libgfrpc.so.0(+0x39b0d)[0x7ffff722cb0d] ))))) 0-: 192.168.0.31:60825: ping timer event already removed
[2023-03-04 12:16:00.294738 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /: revalidate lookup on test-volume-client-0 returned op_ret 0
[2023-03-04 12:16:00.294855 +0000] D [MSGID: 0] [dht-common.c:1796:dht_revalidate_cbk] 0-test-volume-dht: /: internal xattr trusted.glusterfs.dht.mds is not present on subvol test-volume-client-0(gfid is 00000000-0000-0000-0000-000000000001)
[2023-03-04 12:16:00.295249 +0000] D [rpc-clnt-ping.c:188:rpc_clnt_ping_cbk] 0-test-volume-client-1: Ping latency is 0ms
[2023-03-04 12:16:00.296212 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /: revalidate lookup on test-volume-client-1 returned op_ret 0
[2023-03-04 12:16:00.296306 +0000] D [MSGID: 0] [dht-common.c:1796:dht_revalidate_cbk] 0-test-volume-dht: /: internal xattr trusted.glusterfs.dht.mds is not present on subvol test-volume-client-1(gfid is 00000000-0000-0000-0000-000000000001)
[2023-03-04 12:16:00.296752 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: LOOKUP scheduled as fast priority fop
[2023-03-04 12:16:00.298921 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /testdir: revalidate lookup on test-volume-client-0 returned op_ret 0
[2023-03-04 12:16:00.298967 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /testdir: revalidate lookup on test-volume-client-1 returned op_ret 0
[2023-03-04 12:16:00.299148 +0000] D [MSGID: 0] [dht-common.c:1796:dht_revalidate_cbk] 0-test-volume-dht: /testdir: internal xattr trusted.glusterfs.dht.mds is not present on subvol test-volume-client-1(gfid is c288e8aa-abb8-432c-b03d-8c6a747083b0)
[2023-03-04 12:16:00.299858 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: GETXATTR scheduled as normal priority fop
[2023-03-04 12:16:00.302041 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:933:client4_0_getxattr_cbk] 0-test-volume-client-1: resetting op_ret to 0 from 3
[2023-03-04 12:16:00.302094 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:933:client4_0_getxattr_cbk] 0-test-volume-client-0: resetting op_ret to 0 from 16
[2023-03-04 12:16:00.302297 +0000] D [dict.c:650:dict_ref] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0xdc8be) [0x7fffeee628be] -->/usr/local/lib/libglusterfs.so.0(dict_ref+0x95) [0x7ffff72ce375] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL [Invalid argument]
[2023-03-04 12:16:00.302482 +0000] D [dict.c:650:dict_ref] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0xdc8be) [0x7fffeee628be] -->/usr/local/lib/libglusterfs.so.0(dict_ref+0x95) [0x7ffff72ce375] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL [Invalid argument]
[2023-03-04 12:16:00.302773 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: GETXATTR scheduled as normal priority fop
[2023-03-04 12:16:00.304340 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:933:client4_0_getxattr_cbk] 0-test-volume-client-1: resetting op_ret to 0 from 3
[2023-03-04 12:16:00.304549 +0000] D [dict.c:650:dict_ref] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0xdc8be) [0x7fffeee628be] -->/usr/local/lib/libglusterfs.so.0(dict_ref+0x95) [0x7ffff72ce375] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL [Invalid argument]
[2023-03-04 12:16:00.304707 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:933:client4_0_getxattr_cbk] 0-test-volume-client-0: resetting op_ret to 0 from 16
[2023-03-04 12:16:00.304897 +0000] D [dict.c:650:dict_ref] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0xdc8be) [0x7fffeee628be] -->/usr/local/lib/libglusterfs.so.0(dict_ref+0x95) [0x7ffff72ce375] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL [Invalid argument]
[2023-03-04 12:16:00.305588 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: GETXATTR scheduled as normal priority fop
[2023-03-04 12:16:00.307767 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:933:client4_0_getxattr_cbk] 0-test-volume-client-1: resetting op_ret to 0 from 3
[2023-03-04 12:16:00.307787 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:933:client4_0_getxattr_cbk] 0-test-volume-client-0: resetting op_ret to 0 from 3
[2023-03-04 12:16:00.307974 +0000] D [dict.c:650:dict_ref] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0xdc8be) [0x7fffeee628be] -->/usr/local/lib/libglusterfs.so.0(dict_ref+0x95) [0x7ffff72ce375] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL [Invalid argument]
[2023-03-04 12:16:00.308321 +0000] D [dict.c:650:dict_ref] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0xdc8be) [0x7fffeee628be] -->/usr/local/lib/libglusterfs.so.0(dict_ref+0x95) [0x7ffff72ce375] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL [Invalid argument]
[2023-03-04 12:16:00.308591 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: GETXATTR scheduled as normal priority fop
[2023-03-04 12:16:00.310358 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:933:client4_0_getxattr_cbk] 0-test-volume-client-0: resetting op_ret to 0 from 3
[2023-03-04 12:16:00.310364 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:933:client4_0_getxattr_cbk] 0-test-volume-client-1: resetting op_ret to 0 from 3
[2023-03-04 12:16:00.310597 +0000] D [dict.c:650:dict_ref] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0xdc8be) [0x7fffeee628be] -->/usr/local/lib/libglusterfs.so.0(dict_ref+0x95) [0x7ffff72ce375] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL [Invalid argument]
[2023-03-04 12:16:00.310746 +0000] D [dict.c:650:dict_ref] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0xdc8be) [0x7fffeee628be] -->/usr/local/lib/libglusterfs.so.0(dict_ref+0x95) [0x7ffff72ce375] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL [Invalid argument]
[2023-03-04 12:16:01.294394 +0000] D [rpc-clnt-ping.c:308:rpc_clnt_start_ping] 0-test-volume-client-0: returning because transport is connected but there are no frames
[2023-03-04 12:16:01.295411 +0000] D [rpc-clnt-ping.c:308:rpc_clnt_start_ping] 0-test-volume-client-1: returning because transport is connected but there are no frames
[2023-03-04 12:16:01.885764 +0000] D [socket.c:723:__socket_rwv] 0-test-volume-client-1: EOF on socket 12 (errno:0:Success); returning ENODATA
[2023-03-04 12:16:01.885910 +0000] W [socket.c:751:__socket_rwv] 0-test-volume-client-1: readv on 192.168.0.31:60825 failed (No data available)
[2023-03-04 12:16:01.885973 +0000] D [socket.c:2817:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:12) (non-SSL)
[2023-03-04 12:16:01.886105 +0000] D [MSGID: 0] [client.c:2235:client_rpc_notify] 0-test-volume-client-1: got RPC_CLNT_DISCONNECT
[2023-03-04 12:16:01.886348 +0000] I [MSGID: 114018] [client.c:2242:client_rpc_notify] 0-test-volume-client-1: disconnected from client, process will keep trying to connect glusterd until brick's port is available [{conn-name=test-volume-client-1}]
[2023-03-04 12:16:01.887169 +0000] D [rpc-clnt-ping.c:90:rpc_clnt_remove_ping_timer_locked] (--> /usr/local/lib/libglusterfs.so.0(gf_backtrace_save+0x29)[0x7ffff730b069] (--> /usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x41a)[0x7ffff72f280a] (--> /usr/local/lib/libgfrpc.so.0(+0x382ec)[0x7ffff722b2ec] (--> /usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x1e8)[0x7ffff721dd38] (--> /usr/local/lib/libgfrpc.so.0(+0x2daad)[0x7ffff7220aad] ))))) 0-: 192.168.0.31:60825: ping timer event already removed
[2023-03-04 12:16:04.887670 +0000] D [MSGID: 0] [name.c:277:gf_resolve_ip6] 0-resolver: returning ip-192.168.0.31 (port-24007) for hostname: 192.168.0.31 and port: 24007
[2023-03-04 12:16:04.887802 +0000] D [socket.c:3145:socket_fix_ssl_opts] 0-test-volume-client-1: disabling SSL for portmapper connection
[2023-03-04 12:16:04.888556 +0000] D [socket.c:2817:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:6) (non-SSL)
[2023-03-04 12:16:04.888622 +0000] D [MSGID: 0] [client.c:2235:client_rpc_notify] 0-test-volume-client-1: got RPC_CLNT_DISCONNECT
[2023-03-04 12:16:04.889361 +0000] D [rpc-clnt-ping.c:90:rpc_clnt_remove_ping_timer_locked] (--> /usr/local/lib/libglusterfs.so.0(gf_backtrace_save+0x29)[0x7ffff730b069] (--> /usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x41a)[0x7ffff72f280a] (--> /usr/local/lib/libgfrpc.so.0(+0x382ec)[0x7ffff722b2ec] (--> /usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x1e8)[0x7ffff721dd38] (--> /usr/local/lib/libgfrpc.so.0(+0x2daad)[0x7ffff7220aad] ))))) 0-: 192.168.0.31:60825: ping timer event already removed
[2023-03-04 12:16:06.467830 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: LOOKUP scheduled as fast priority fop
[2023-03-04 12:16:06.469259 +0000] D [rpc-clnt-ping.c:90:rpc_clnt_remove_ping_timer_locked] (--> /usr/local/lib/libglusterfs.so.0(gf_backtrace_save+0x29)[0x7ffff730b069] (--> /usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x41a)[0x7ffff72f280a] (--> /usr/local/lib/libgfrpc.so.0(+0x382ec)[0x7ffff722b2ec] (--> /usr/local/lib/libgfrpc.so.0(+0x3966d)[0x7ffff722c66d] (--> /usr/local/lib/libgfrpc.so.0(+0x39b0d)[0x7ffff722cb0d] ))))) 0-: 192.168.0.30:58262: ping timer event already removed
[2023-03-04 12:16:06.469629 +0000] D [MSGID: 0] [client.c:203:client_submit_request] 0-test-volume-client-1: connection in disconnected state
[2023-03-04 12:16:06.469866 +0000] W [MSGID: 114031] [client-rpc-fops_v2.c:2561:client4_0_lookup_cbk] 0-test-volume-client-1: remote operation failed. [{path=/}, {gfid=00000000-0000-0000-0000-000000000001}, {errno=107}, {error=Transport endpoint is not connected}]
[2023-03-04 12:16:06.470100 +0000] D [rpc-clnt-ping.c:188:rpc_clnt_ping_cbk] 0-test-volume-client-0: Ping latency is 0ms
[2023-03-04 12:16:06.470313 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /: revalidate lookup on test-volume-client-0 returned op_ret 0
[2023-03-04 12:16:06.470436 +0000] D [MSGID: 0] [dht-common.c:1796:dht_revalidate_cbk] 0-test-volume-dht: /: internal xattr trusted.glusterfs.dht.mds is not present on subvol test-volume-client-0(gfid is 00000000-0000-0000-0000-000000000001)
[2023-03-04 12:16:06.470034 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000037898, test-volume-client-1 returned -1 [Transport endpoint is not connected]
[2023-03-04 12:16:06.470664 +0000] D [dict.c:1393:dict_get_with_ref] (-->/usr/local/lib/libglusterfs.so.0(dict_get_uint32+0xe5) [0x7ffff72d6355] -->/usr/local/lib/libglusterfs.so.0(dict_get_with_ref+0xb9) [0x7ffff72d4569] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL: trusted.glusterfs.dht.commithash [Invalid argument]
[2023-03-04 12:16:06.470731 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /: revalidate lookup on test-volume-client-1 returned op_ret -1 [Transport endpoint is not connected]
[2023-03-04 12:16:06.471176 +0000] W [MSGID: 114029] [client-rpc-fops_v2.c:2991:client4_0_lookup] 0-test-volume-client-1: failed to send the fop []
[2023-03-04 12:16:06.471569 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: LOOKUP scheduled as fast priority fop
[2023-03-04 12:16:06.472297 +0000] D [MSGID: 0] [client.c:203:client_submit_request] 0-test-volume-client-1: connection in disconnected state
[2023-03-04 12:16:06.472440 +0000] W [MSGID: 114031] [client-rpc-fops_v2.c:2561:client4_0_lookup_cbk] 0-test-volume-client-1: remote operation failed. [{path=/testdir}, {gfid=c288e8aa-abb8-432c-b03d-8c6a747083b0}, {errno=107}, {error=Transport endpoint is not connected}]
[2023-03-04 12:16:06.472518 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000038098, test-volume-client-1 returned -1 [Transport endpoint is not connected]
[2023-03-04 12:16:06.472646 +0000] D [dict.c:1393:dict_get_with_ref] (-->/usr/local/lib/libglusterfs.so.0(dict_get_uint32+0xe5) [0x7ffff72d6355] -->/usr/local/lib/libglusterfs.so.0(dict_get_with_ref+0xb9) [0x7ffff72d4569] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL: trusted.glusterfs.dht.commithash [Invalid argument]
[2023-03-04 12:16:06.472705 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /testdir: revalidate lookup on test-volume-client-1 returned op_ret -1 [Transport endpoint is not connected]
[2023-03-04 12:16:06.472808 +0000] W [MSGID: 114029] [client-rpc-fops_v2.c:2991:client4_0_lookup] 0-test-volume-client-1: failed to send the fop []
[2023-03-04 12:16:06.473783 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /testdir: revalidate lookup on test-volume-client-0 returned op_ret 0
[2023-03-04 12:16:06.474418 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: REMOVEXATTR scheduled as normal priority fop
[2023-03-04 12:16:06.474724 +0000] D [dict.c:650:dict_ref] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0x11e556) [0x7fffeeea4556] -->/usr/local/lib/libglusterfs.so.0(dict_ref+0x95) [0x7ffff72ce375] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL [Invalid argument]
[2023-03-04 12:16:06.474836 +0000] D [MSGID: 101015] [glusterfs3.h:785:dict_to_xdr] 0-dict: key 'trusted.glusterfs.dht.mds' would not be sent on wire in the future [Invalid argument]
[2023-03-04 12:16:06.476143 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:1525:client4_0_xattrop_cbk] 0-test-volume-client-0: resetting op_ret to 0 from 0
[2023-03-04 12:16:06.477667 +0000] D [MSGID: 0] [client.c:203:client_submit_request] 0-test-volume-client-1: connection in disconnected state
[2023-03-04 12:16:06.477822 +0000] W [MSGID: 114031] [client-rpc-fops_v2.c:1057:client4_0_removexattr_cbk] 0-test-volume-client-1: remote operation failed. [{errno=107}, {error=Transport endpoint is not connected}]
[2023-03-04 12:16:06.477914 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:1061:client4_0_removexattr_cbk] 0-stack-trace: stack-address: 0x61c000038898, test-volume-client-1 returned -1 [Transport endpoint is not connected]
[2023-03-04 12:16:06.478023 +0000] D [MSGID: 0] [dht-common.c:3843:dht_setxattr_non_mds_cbk] 0-test-volume-dht: subvolume test-volume-client-1 returned -1 [Transport endpoint is not connected]
[2023-03-04 12:16:06.478195 +0000] D [dict.c:1393:dict_get_with_ref] (-->/usr/local/lib/libglusterfs.so.0(dict_get_iatt+0xe7) [0x7ffff72d9b17] -->/usr/local/lib/libglusterfs.so.0(dict_get_with_ref+0xb9) [0x7ffff72d4569] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL: virt-gf-prestat [Invalid argument]
[2023-03-04 12:16:06.478446 +0000] W [MSGID: 114029] [client-rpc-fops_v2.c:4485:client4_0_removexattr] 0-test-volume-client-1: failed to send the fop []
Client log with log-level set to TRACE:
[2023-03-04 12:18:48.007817 +0000] I [MSGID: 114057] [client-handshake.c:871:select_server_supported_programs] 0-test-volume-client-0: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2023-03-04 12:18:48.008378 +0000] I [MSGID: 114057] [client-handshake.c:871:select_server_supported_programs] 0-test-volume-client-1: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2023-03-04 12:18:48.020823 +0000] I [MSGID: 114046] [client-handshake.c:621:client_setvolume_cbk] 0-test-volume-client-0: Connected, attached to remote volume [{conn-name=test-volume-client-0}, {remote_subvol=/root/glusterfs-server}]
[2023-03-04 12:18:48.023940 +0000] I [MSGID: 114046] [client-handshake.c:621:client_setvolume_cbk] 0-test-volume-client-1: Connected, attached to remote volume [{conn-name=test-volume-client-1}, {remote_subvol=/root/glusterfs-server}]
[2023-03-04 12:18:48.034202 +0000] I [fuse-bridge.c:5328:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.34
[2023-03-04 12:18:48.034344 +0000] I [fuse-bridge.c:5960:fuse_graph_sync] 0-fuse: switched to graph 0
[2023-03-04 12:18:48.041798 +0000] I [MSGID: 109060] [dht-layout.c:562:dht_layout_normalize] 0-test-volume-dht: Found anomalies [{path=/}, {gfid=00000000-0000-0000-0000-000000000001}, {holes=1}, {overlaps=0}]
[2023-03-04 12:19:07.330178 +0000] I [glusterfsd-mgmt.c:35:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2023-03-04 12:19:07.357644 +0000] I [glusterfsd-mgmt.c:2336:mgmt_getspec_cbk] 0-glusterfs: Received list of available volfile servers: 192.168.0.31:24007
[2023-03-04 12:19:07.357819 +0000] I [MSGID: 101219] [common-utils.c:3069:gf_set_volfile_server_common] 0-gluster: duplicate entry for volfile-server [{errno=17}, {error=File exists}]
[2023-03-04 12:19:07.361730 +0000] I [MSGID: 0] [options.c:1384:xlator_option_reconf_time] 0-test-volume-client-0: option ping-timeout using set value 1
[2023-03-04 12:19:07.361963 +0000] I [MSGID: 0] [options.c:1380:xlator_option_reconf_bool] 0-test-volume-client-0: option send-gids using set value true
[2023-03-04 12:19:07.362123 +0000] I [MSGID: 0] [options.c:1380:xlator_option_reconf_bool] 0-test-volume-client-0: option strict-locks using set value off
[2023-03-04 12:19:07.362247 +0000] I [MSGID: 0] [options.c:1384:xlator_option_reconf_time] 0-test-volume-client-1: option ping-timeout using set value 1
[2023-03-04 12:19:07.362425 +0000] I [MSGID: 0] [options.c:1380:xlator_option_reconf_bool] 0-test-volume-client-1: option send-gids using set value true
[2023-03-04 12:19:07.362523 +0000] I [MSGID: 0] [options.c:1380:xlator_option_reconf_bool] 0-test-volume-client-1: option strict-locks using set value off
[2023-03-04 12:19:07.362654 +0000] I [MSGID: 0] [options.c:1380:xlator_option_reconf_bool] 0-test-volume-dht: option lock-migration using set value off
[2023-03-04 12:19:07.362796 +0000] I [MSGID: 0] [options.c:1380:xlator_option_reconf_bool] 0-test-volume-dht: option force-migration using set value off
[2023-03-04 12:19:07.363073 +0000] I [MSGID: 0] [options.c:1380:xlator_option_reconf_bool] 0-test-volume-utime: option noatime using set value on
[2023-03-04 12:19:07.363558 +0000] T [MSGID: 0] [options.c:1380:xlator_option_reconf_bool] 0-test-volume: option dump-fd-stats using default value off
[2023-03-04 12:19:07.363692 +0000] I [MSGID: 0] [options.c:1380:xlator_option_reconf_bool] 0-test-volume: option count-fop-hits using set value off
[2023-03-04 12:19:07.363847 +0000] I [MSGID: 0] [options.c:1380:xlator_option_reconf_bool] 0-test-volume: option latency-measurement using set value off
[2023-03-04 12:19:07.363980 +0000] T [MSGID: 0] [options.c:1375:xlator_option_reconf_int32] 0-test-volume: option ios-dump-interval using default value 0
[2023-03-04 12:19:07.364081 +0000] T [MSGID: 0] [options.c:1375:xlator_option_reconf_int32] 0-test-volume: option ios-sample-interval using default value 0
[2023-03-04 12:19:07.364234 +0000] T [MSGID: 0] [options.c:1371:xlator_option_reconf_str] 0-test-volume: option ios-dump-format using default value json
[2023-03-04 12:19:07.364337 +0000] I [io-stats.c:3797:ios_sample_buf_size_configure] 0-test-volume: Configure ios_sample_buf size is 1024 because ios_sample_interval is 0
[2023-03-04 12:19:07.364421 +0000] T [MSGID: 0] [options.c:1371:xlator_option_reconf_str] 0-test-volume: option sys-log-level using default value CRITICAL
[2023-03-04 12:19:07.364558 +0000] I [MSGID: 0] [options.c:1371:xlator_option_reconf_str] 0-test-volume: option log-level using set value TRACE
[2023-03-04 12:19:07.364687 +0000] T [MSGID: 0] [options.c:1371:xlator_option_reconf_str] 0-test-volume: option logger not set
[2023-03-04 12:19:07.364788 +0000] T [MSGID: 0] [options.c:1371:xlator_option_reconf_str] 0-test-volume: option log-format not set
[2023-03-04 12:19:07.364877 +0000] T [MSGID: 0] [options.c:1374:xlator_option_reconf_uint32] 0-test-volume: option log-buf-size using default value 5
[2023-03-04 12:19:07.364976 +0000] T [MSGID: 0] [options.c:1384:xlator_option_reconf_time] 0-test-volume: option log-flush-timeout using default value 120
[2023-03-04 12:19:07.365093 +0000] I [MSGID: 0] [options.c:1375:xlator_option_reconf_int32] 0-test-volume: option threads using set value 16
[2023-03-04 12:19:07.365189 +0000] T [MSGID: 0] [options.c:79:xlator_option_validate_int] 0-test-volume: no range check required for 'option threads 16'
[2023-03-04 12:19:07.365254 +0000] D [io-stats.c:3939:reconfigure] 0-test-volume: reconfigure returning 0
[2023-03-04 12:19:07.365542 +0000] D [glusterfsd-mgmt.c:2423:mgmt_getspec_cbk] 0-glusterfsd-mgmt: No need to re-load volfile, reconfigure done
[2023-03-04 12:19:07.365729 +0000] T [socket.c:2791:socket_event_handler] 0-glusterfs: (sock:9) socket_event_poll_in returned 0
[2023-03-04 12:19:51.743639 +0000] T [socket.c:2748:socket_event_handler] 0-glusterfs: client (sock:9) in:1, out:0, err:0
[2023-03-04 12:19:51.743806 +0000] T [socket.c:2774:socket_event_handler] 0-glusterfs: Client socket (9) is already connected
[2023-03-04 12:19:51.743866 +0000] T [socket.c:554:__socket_ssl_readv] 0-glusterfs: ***** reading over non-SSL
[2023-03-04 12:19:51.743936 +0000] T [socket.c:554:__socket_ssl_readv] 0-glusterfs: ***** reading over non-SSL
[2023-03-04 12:19:51.744049 +0000] T [rpc-clnt.c:647:rpc_clnt_handle_cbk] 0-glusterfs: receivd rpc message (XID: 0x2, Ver: 2, Program: 52743234, ProgVers: 1, Proc: 1) from rpc-transport (glusterfs)
[2023-03-04 12:19:51.744128 +0000] I [glusterfsd-mgmt.c:35:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2023-03-04 12:19:51.744249 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 84, rpc hdr: 64
[2023-03-04 12:19:51.744447 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 18, XID: 0x4, Program: GlusterFS Handshake, ProgVers: 2, Proc: 2) to rpc-transport (glusterfs)
[2023-03-04 12:19:51.744528 +0000] D [rpc-clnt-ping.c:290:rpc_clnt_start_ping] 0-glusterfs: ping timeout is 0, returning
[2023-03-04 12:19:51.744615 +0000] T [socket.c:2791:socket_event_handler] 0-glusterfs: (sock:9) socket_event_poll_in returned 0
[2023-03-04 12:19:51.769005 +0000] T [socket.c:2748:socket_event_handler] 0-glusterfs: client (sock:9) in:1, out:0, err:0
[2023-03-04 12:19:51.769107 +0000] T [socket.c:2774:socket_event_handler] 0-glusterfs: Client socket (9) is already connected
[2023-03-04 12:19:51.769144 +0000] T [socket.c:554:__socket_ssl_readv] 0-glusterfs: ***** reading over non-SSL
[2023-03-04 12:19:51.769244 +0000] T [socket.c:554:__socket_ssl_readv] 0-glusterfs: ***** reading over non-SSL
[2023-03-04 12:19:51.769380 +0000] T [socket.c:554:__socket_ssl_readv] 0-glusterfs: ***** reading over non-SSL
[2023-03-04 12:19:51.769481 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-glusterfs: received rpc message (RPC XID: 0x4 Program: GlusterFS Handshake, ProgVers: 2, Proc: 2) from rpc-transport (glusterfs)
[2023-03-04 12:19:51.769582 +0000] I [glusterfsd-mgmt.c:2336:mgmt_getspec_cbk] 0-glusterfs: Received list of available volfile servers: 192.168.0.31:24007
[2023-03-04 12:19:51.769643 +0000] I [MSGID: 101219] [common-utils.c:3069:gf_set_volfile_server_common] 0-gluster: duplicate entry for volfile-server [{errno=17}, {error=File exists}]
[2023-03-04 12:19:51.769697 +0000] I [glusterfsd-mgmt.c:2369:mgmt_getspec_cbk] 0-glusterfs: No change in volfile,continuing
[2023-03-04 12:19:51.769765 +0000] T [socket.c:2791:socket_event_handler] 0-glusterfs: (sock:9) socket_event_poll_in returned 0
[2023-03-04 12:20:47.920178 +0000] D [logging.c:1741:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present
[2023-03-04 12:20:47.920438 +0000] D [logging.c:1704:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5
[2023-03-04 12:20:56.509055 +0000] T [MSGID: 0] [fuse-bridge.c:1529:fuse_getattr] 0-stack-trace: stack-address: 0x61c000031098, winding from fuse to meta-autoload
[2023-03-04 12:20:56.509176 +0000] T [MSGID: 0] [defaults.c:3024:default_lookup] 0-stack-trace: stack-address: 0x61c000031098, winding from meta-autoload to test-volume
[2023-03-04 12:20:56.509282 +0000] T [MSGID: 0] [io-stats.c:2768:io_stats_lookup] 0-stack-trace: stack-address: 0x61c000031098, winding from test-volume to test-volume-io-threads
[2023-03-04 12:20:56.509402 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: LOOKUP scheduled as fast priority fop
[2023-03-04 12:20:56.509519 +0000] T [MSGID: 0] [defaults.c:2215:default_lookup_resume] 0-stack-trace: stack-address: 0x61c000031098, winding from test-volume-io-threads to test-volume-md-cache
[2023-03-04 12:20:56.509696 +0000] T [MSGID: 0] [md-cache.c:628:mdc_inode_iatt_get] 0-md-cache: iatt cache not valid for (00000000-0000-0000-0000-000000000001)
[2023-03-04 12:20:56.509813 +0000] T [MSGID: 0] [md-cache.c:1337:mdc_lookup] 0-stack-trace: stack-address: 0x61c000031098, winding from test-volume-md-cache to test-volume-quick-read
[2023-03-04 12:20:56.509922 +0000] T [MSGID: 0] [quick-read.c:647:qr_lookup] 0-stack-trace: stack-address: 0x61c000031098, winding from test-volume-quick-read to test-volume-open-behind
[2023-03-04 12:20:56.510027 +0000] T [MSGID: 0] [defaults.c:3024:default_lookup] 0-stack-trace: stack-address: 0x61c000031098, winding from test-volume-open-behind to test-volume-write-behind
[2023-03-04 12:20:56.510151 +0000] T [MSGID: 0] [write-behind.c:2413:wb_lookup] 0-stack-trace: stack-address: 0x61c000031098, winding from test-volume-write-behind to test-volume-utime
[2023-03-04 12:20:56.510262 +0000] T [MSGID: 0] [utime.c:260:gf_utime_lookup] 0-stack-trace: stack-address: 0x61c000031098, winding from test-volume-utime to test-volume-dht
[2023-03-04 12:20:56.510496 +0000] T [MSGID: 0] [dht-common.c:3453:dht_do_revalidate] 0-stack-trace: stack-address: 0x61c000031098, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.510766 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 540, rpc hdr: 84
[2023-03-04 12:20:56.510910 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 19, XID: 0x12, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.512108 +0000] D [rpc-clnt-ping.c:90:rpc_clnt_remove_ping_timer_locked] (--> /usr/local/lib/libglusterfs.so.0(gf_backtrace_save+0x29)[0x7ffff730b069] (--> /usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x41a)[0x7ffff72f280a] (--> /usr/local/lib/libgfrpc.so.0(+0x382ec)[0x7ffff722b2ec] (--> /usr/local/lib/libgfrpc.so.0(+0x3966d)[0x7ffff722c66d] (--> /usr/local/lib/libgfrpc.so.0(+0x39b0d)[0x7ffff722cb0d] ))))) 0-: 192.168.0.30:57763: ping timer event already removed
[2023-03-04 12:20:56.512421 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 0, rpc hdr: 80
[2023-03-04 12:20:56.512605 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 20, XID: 0x13, Program: GF-DUMP, ProgVers: 1, Proc: 2) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.512733 +0000] T [MSGID: 0] [dht-common.c:3453:dht_do_revalidate] 0-stack-trace: stack-address: 0x61c000031098, winding from test-volume-dht to test-volume-client-1
[2023-03-04 12:20:56.512962 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 540, rpc hdr: 84
[2023-03-04 12:20:56.512983 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.513044 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.513063 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 19, XID: 0x11, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.513088 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.513131 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.513202 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x12 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.513274 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000031098, test-volume-client-0 returned 0
[2023-03-04 12:20:56.513406 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /: revalidate lookup on test-volume-client-0 returned op_ret 0
[2023-03-04 12:20:56.513528 +0000] D [MSGID: 0] [dht-common.c:1796:dht_revalidate_cbk] 0-test-volume-dht: /: internal xattr trusted.glusterfs.dht.mds is not present on subvol test-volume-client-0(gfid is 00000000-0000-0000-0000-000000000001)
[2023-03-04 12:20:56.513635 +0000] D [rpc-clnt-ping.c:90:rpc_clnt_remove_ping_timer_locked] (--> /usr/local/lib/libglusterfs.so.0(gf_backtrace_save+0x29)[0x7ffff730b069] (--> /usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x41a)[0x7ffff72f280a] (--> /usr/local/lib/libgfrpc.so.0(+0x382ec)[0x7ffff722b2ec] (--> /usr/local/lib/libgfrpc.so.0(+0x3966d)[0x7ffff722c66d] (--> /usr/local/lib/libgfrpc.so.0(+0x39b0d)[0x7ffff722cb0d] ))))) 0-: 192.168.0.31:51625: ping timer event already removed
[2023-03-04 12:20:56.513667 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.513689 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 0, rpc hdr: 80
[2023-03-04 12:20:56.513708 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.513740 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.513760 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 21, XID: 0x12, Program: GF-DUMP, ProgVers: 1, Proc: 2) to rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.513776 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.513860 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.513917 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x13 Program: GF-DUMP, ProgVers: 1, Proc: 2) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.513990 +0000] D [rpc-clnt-ping.c:188:rpc_clnt_ping_cbk] 0-test-volume-client-0: Ping latency is 1ms
[2023-03-04 12:20:56.514081 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.514208 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-1: client (sock:12) in:1, out:0, err:0
[2023-03-04 12:20:56.514251 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-1: Client socket (12) is already connected
[2023-03-04 12:20:56.514277 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.514337 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.514407 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-1: received rpc message (RPC XID: 0x12 Program: GF-DUMP, ProgVers: 1, Proc: 2) from rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.514432 +0000] D [rpc-clnt-ping.c:188:rpc_clnt_ping_cbk] 0-test-volume-client-1: Ping latency is 0ms
[2023-03-04 12:20:56.514486 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-1: (sock:12) socket_event_poll_in returned 0
[2023-03-04 12:20:56.514659 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-1: client (sock:12) in:1, out:0, err:0
[2023-03-04 12:20:56.514694 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-1: Client socket (12) is already connected
[2023-03-04 12:20:56.514716 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.514760 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.514811 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-1: received rpc message (RPC XID: 0x11 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) from rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.514870 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000031098, test-volume-client-1 returned 0
[2023-03-04 12:20:56.514915 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /: revalidate lookup on test-volume-client-1 returned op_ret 0
[2023-03-04 12:20:56.514991 +0000] D [MSGID: 0] [dht-common.c:1796:dht_revalidate_cbk] 0-test-volume-dht: /: internal xattr trusted.glusterfs.dht.mds is not present on subvol test-volume-client-1(gfid is 00000000-0000-0000-0000-000000000001)
[2023-03-04 12:20:56.515081 +0000] T [MSGID: 0] [dht-common.c:1975:dht_revalidate_cbk] 0-stack-trace: stack-address: 0x61c000031098, test-volume-dht returned 0
[2023-03-04 12:20:56.515185 +0000] T [MSGID: 0] [utime.c:218:gf_utime_set_mdata_lookup_cbk] 0-stack-trace: stack-address: 0x61c000031098, test-volume-utime returned 0
[2023-03-04 12:20:56.515266 +0000] T [MSGID: 0] [write-behind.c:2371:wb_lookup_cbk] 0-stack-trace: stack-address: 0x61c000031098, test-volume-write-behind returned 0
[2023-03-04 12:20:56.515415 +0000] T [MSGID: 0] [quick-read.c:606:qr_lookup_cbk] 0-stack-trace: stack-address: 0x61c000031098, test-volume-quick-read returned 0
[2023-03-04 12:20:56.515725 +0000] T [md-cache.c:583:mdc_inode_iatt_set_validate] (-->/usr/local/lib/glusterfs/11dev/xlator/performance/md-cache.so(+0x26746) [0x7fffeec9b746] -->/usr/local/lib/glusterfs/11dev/xlator/performance/md-cache.so(+0x26105) [0x7fffeec9b105] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-md-cache: Updated iatt(00000000-0000-0000-0000-000000000001) time:1677932456 generation=0
[2023-03-04 12:20:56.515835 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.glusterfs.mdata doesn't satisfy caching requirements
[2023-03-04 12:20:56.515941 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:20:56.515923 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.glusterfs.mdata doesn't satisfy caching requirements
[2023-03-04 12:20:56.515935 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key glusterfs.open-fd-count doesn't satisfy caching requirements
[2023-03-04 12:20:56.516108 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:20:56.516094 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key glusterfs.open-fd-count doesn't satisfy caching requirements
[2023-03-04 12:20:56.516101 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.glusterfs.dht doesn't satisfy caching requirements
[2023-03-04 12:20:56.516235 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:20:56.516209 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.glusterfs.dht doesn't satisfy caching requirements
[2023-03-04 12:20:56.516227 +0000] T [MSGID: 0] [md-cache.c:806:mdc_inode_xatt_set] 0-md-cache: xatt cache set for (00000000-0000-0000-0000-000000000001) time:1677932456
[2023-03-04 12:20:56.516426 +0000] T [MSGID: 0] [md-cache.c:1269:mdc_lookup_cbk] 0-stack-trace: stack-address: 0x61c000031098, test-volume-md-cache returned 0
[2023-03-04 12:20:56.516499 +0000] T [MSGID: 0] [defaults.c:1546:default_lookup_cbk] 0-stack-trace: stack-address: 0x61c000031098, test-volume-io-threads returned 0
[2023-03-04 12:20:56.516590 +0000] T [MSGID: 0] [io-stats.c:2297:io_stats_lookup_cbk] 0-stack-trace: stack-address: 0x61c000031098, test-volume returned 0
[2023-03-04 12:20:56.516670 +0000] T [fuse-bridge.c:1378:fuse_attr_cbk] 0-glusterfs-fuse: 19: LOOKUP() / => 1
[2023-03-04 12:20:56.516728 +0000] T [fuse-bridge.c:374:send_fuse_iov] 0-glusterfs-fuse: writev() result 120/120
[2023-03-04 12:20:56.516948 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-1: (sock:12) socket_event_poll_in returned 0
[2023-03-04 12:20:56.516922 +0000] T [MSGID: 0] [fuse-resolve.c:206:fuse_resolve_gfid] 0-stack-trace: stack-address: 0x61c000031898, winding from fuse to meta-autoload
[2023-03-04 12:20:56.517082 +0000] T [MSGID: 0] [defaults.c:3024:default_lookup] 0-stack-trace: stack-address: 0x61c000031898, winding from meta-autoload to test-volume
[2023-03-04 12:20:56.517195 +0000] T [MSGID: 0] [io-stats.c:2768:io_stats_lookup] 0-stack-trace: stack-address: 0x61c000031898, winding from test-volume to test-volume-io-threads
[2023-03-04 12:20:56.517444 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:20:56.517375 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: LOOKUP scheduled as fast priority fop
[2023-03-04 12:20:56.517438 +0000] T [MSGID: 0] [defaults.c:2215:default_lookup_resume] 0-stack-trace: stack-address: 0x61c000031898, winding from test-volume-io-threads to test-volume-md-cache
[2023-03-04 12:20:56.517714 +0000] T [MSGID: 0] [md-cache.c:1326:mdc_lookup] 0-stack-trace: stack-address: 0x61c000031898, test-volume-md-cache returned 0
[2023-03-04 12:20:56.517805 +0000] T [MSGID: 0] [defaults.c:1546:default_lookup_cbk] 0-stack-trace: stack-address: 0x61c000031898, test-volume-io-threads returned 0
[2023-03-04 12:20:56.517907 +0000] T [MSGID: 0] [io-stats.c:2297:io_stats_lookup_cbk] 0-stack-trace: stack-address: 0x61c000031898, test-volume returned 0
[2023-03-04 12:20:56.518058 +0000] T [MSGID: 0] [fuse-resolve.c:100:fuse_resolve_entry] 0-stack-trace: stack-address: 0x61c000041098, winding from fuse to meta-autoload
[2023-03-04 12:20:56.518143 +0000] T [MSGID: 0] [defaults.c:3024:default_lookup] 0-stack-trace: stack-address: 0x61c000041098, winding from meta-autoload to test-volume
[2023-03-04 12:20:56.518282 +0000] T [MSGID: 0] [io-stats.c:2768:io_stats_lookup] 0-stack-trace: stack-address: 0x61c000041098, winding from test-volume to test-volume-io-threads
[2023-03-04 12:20:56.518562 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: LOOKUP scheduled as fast priority fop
[2023-03-04 12:20:56.518720 +0000] T [MSGID: 0] [defaults.c:2215:default_lookup_resume] 0-stack-trace: stack-address: 0x61c000041098, winding from test-volume-io-threads to test-volume-md-cache
[2023-03-04 12:20:56.518835 +0000] T [MSGID: 0] [md-cache.c:1337:mdc_lookup] 0-stack-trace: stack-address: 0x61c000041098, winding from test-volume-md-cache to test-volume-quick-read
[2023-03-04 12:20:56.518906 +0000] T [MSGID: 0] [quick-read.c:647:qr_lookup] 0-stack-trace: stack-address: 0x61c000041098, winding from test-volume-quick-read to test-volume-open-behind
[2023-03-04 12:20:56.518955 +0000] T [MSGID: 0] [defaults.c:3024:default_lookup] 0-stack-trace: stack-address: 0x61c000041098, winding from test-volume-open-behind to test-volume-write-behind
[2023-03-04 12:20:56.519043 +0000] T [MSGID: 0] [write-behind.c:2413:wb_lookup] 0-stack-trace: stack-address: 0x61c000041098, winding from test-volume-write-behind to test-volume-utime
[2023-03-04 12:20:56.519119 +0000] T [MSGID: 0] [utime.c:260:gf_utime_lookup] 0-stack-trace: stack-address: 0x61c000041098, winding from test-volume-utime to test-volume-dht
[2023-03-04 12:20:56.519229 +0000] T [MSGID: 0] [dht-hashfn.c:95:dht_hash_compute] 0-test-volume-dht: trying regex for testdir
[2023-03-04 12:20:56.519509 +0000] D [MSGID: 0] [dht-common.c:3316:dht_do_fresh_lookup] 0-test-volume-dht: /testdir: No gfid-req available
[2023-03-04 12:20:56.519660 +0000] D [MSGID: 0] [dht-common.c:3358:dht_do_fresh_lookup] 0-test-volume-dht: /testdir: Calling fresh lookup on test-volume-client-0
[2023-03-04 12:20:56.519782 +0000] T [MSGID: 0] [dht-common.c:3361:dht_do_fresh_lookup] 0-stack-trace: stack-address: 0x61c000041098, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.520041 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 460, rpc hdr: 84
[2023-03-04 12:20:56.520168 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 23, XID: 0x14, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.521969 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.522057 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.522083 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.522137 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.522218 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x14 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.522262 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:2565:client4_0_lookup_cbk] 0-test-volume-client-0: not found on remote node
[2023-03-04 12:20:56.522390 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000041098, test-volume-client-0 returned -1 [No such file or directory]
[2023-03-04 12:20:56.522453 +0000] D [MSGID: 0] [dht-common.c:3070:dht_lookup_cbk] 0-test-volume-dht: /testdir: fresh_lookup on test-volume-client-0 returned with op_ret -1 [No such file or directory]
[2023-03-04 12:20:56.522551 +0000] D [MSGID: 0] [dht-common.c:3081:dht_lookup_cbk] 0-test-volume-dht: Entry /testdir missing on subvol test-volume-client-0
[2023-03-04 12:20:56.522657 +0000] D [MSGID: 0] [dht-common.c:2871:dht_lookup_everywhere] 0-test-volume-dht: winding lookup call to 2 subvols
[2023-03-04 12:20:56.522734 +0000] T [MSGID: 0] [dht-common.c:2874:dht_lookup_everywhere] 0-stack-trace: stack-address: 0x61c000041098, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.523042 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 460, rpc hdr: 84
[2023-03-04 12:20:56.523197 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 23, XID: 0x15, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.523329 +0000] T [MSGID: 0] [dht-common.c:2874:dht_lookup_everywhere] 0-stack-trace: stack-address: 0x61c000041098, winding from test-volume-dht to test-volume-client-1
[2023-03-04 12:20:56.523593 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 460, rpc hdr: 84
[2023-03-04 12:20:56.523682 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 23, XID: 0x13, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.523777 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.524556 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.524621 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.524657 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.524699 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.524775 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x15 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.524810 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:2565:client4_0_lookup_cbk] 0-test-volume-client-0: not found on remote node
[2023-03-04 12:20:56.524851 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000041098, test-volume-client-0 returned -1 [No such file or directory]
[2023-03-04 12:20:56.525214 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-1: client (sock:12) in:1, out:0, err:0
[2023-03-04 12:20:56.525272 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-1: Client socket (12) is already connected
[2023-03-04 12:20:56.525301 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.525406 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.525495 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-1: received rpc message (RPC XID: 0x13 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) from rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.525543 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:2565:client4_0_lookup_cbk] 0-test-volume-client-1: not found on remote node
[2023-03-04 12:20:56.525592 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000041098, test-volume-client-1 returned -1 [No such file or directory]
[2023-03-04 12:20:56.525664 +0000] D [MSGID: 0] [dht-common.c:2684:dht_lookup_everywhere_cbk] 0-test-volume-dht: returned with op_ret -1 and op_errno 2 (/testdir) from subvol test-volume-client-1
[2023-03-04 12:20:56.525867 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-1: (sock:12) socket_event_poll_in returned 0
[2023-03-04 12:20:56.525928 +0000] D [MSGID: 0] [dht-common.c:2684:dht_lookup_everywhere_cbk] 0-test-volume-dht: returned with op_ret -1 and op_errno 2 (/testdir) from subvol test-volume-client-0
[2023-03-04 12:20:56.526005 +0000] D [MSGID: 0] [dht-common.c:2381:dht_lookup_everywhere_done] 0-test-volume-dht: STATUS: hashed_subvol test-volume-client-0 cached_subvol null
[2023-03-04 12:20:56.526147 +0000] D [MSGID: 0] [dht-common.c:2436:dht_lookup_everywhere_done] 0-test-volume-dht: There was no cached file and unlink on hashed is not skipped /testdir
[2023-03-04 12:20:56.526326 +0000] D [MSGID: 0] [dht-common.c:2441:dht_lookup_everywhere_done] 0-stack-trace: stack-address: 0x61c000041098, test-volume-dht returned -1 [No such file or directory]
[2023-03-04 12:20:56.526421 +0000] D [MSGID: 0] [utime.c:218:gf_utime_set_mdata_lookup_cbk] 0-stack-trace: stack-address: 0x61c000041098, test-volume-utime returned -1 [No such file or directory]
[2023-03-04 12:20:56.526539 +0000] D [MSGID: 0] [write-behind.c:2371:wb_lookup_cbk] 0-stack-trace: stack-address: 0x61c000041098, test-volume-write-behind returned -1 [No such file or directory]
[2023-03-04 12:20:56.526658 +0000] D [MSGID: 0] [quick-read.c:606:qr_lookup_cbk] 0-stack-trace: stack-address: 0x61c000041098, test-volume-quick-read returned -1 [No such file or directory]
[2023-03-04 12:20:56.526747 +0000] D [MSGID: 0] [md-cache.c:1269:mdc_lookup_cbk] 0-stack-trace: stack-address: 0x61c000041098, test-volume-md-cache returned -1 [No such file or directory]
[2023-03-04 12:20:56.526828 +0000] D [MSGID: 0] [defaults.c:1546:default_lookup_cbk] 0-stack-trace: stack-address: 0x61c000041098, test-volume-io-threads returned -1 [No such file or directory]
[2023-03-04 12:20:56.526927 +0000] D [MSGID: 0] [io-stats.c:2297:io_stats_lookup_cbk] 0-stack-trace: stack-address: 0x61c000041098, test-volume returned -1 [No such file or directory]
[2023-03-04 12:20:56.527042 +0000] D [fuse-resolve.c:59:fuse_resolve_entry_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001/testdir: failed to resolve (No such file or directory)
[2023-03-04 12:20:56.527101 +0000] T [fuse-bridge.c:1190:fuse_lookup_resume] 0-glusterfs-fuse: 8: LOOKUP /testdir
[2023-03-04 12:20:56.527273 +0000] T [MSGID: 0] [fuse-bridge.c:1223:fuse_lookup_resume] 0-stack-trace: stack-address: 0x61c000019098, winding from fuse to meta-autoload
[2023-03-04 12:20:56.527409 +0000] T [MSGID: 0] [defaults.c:3024:default_lookup] 0-stack-trace: stack-address: 0x61c000019098, winding from meta-autoload to test-volume
[2023-03-04 12:20:56.527551 +0000] T [MSGID: 0] [io-stats.c:2768:io_stats_lookup] 0-stack-trace: stack-address: 0x61c000019098, winding from test-volume to test-volume-io-threads
[2023-03-04 12:20:56.527661 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: LOOKUP scheduled as fast priority fop
[2023-03-04 12:20:56.527778 +0000] T [MSGID: 0] [defaults.c:2215:default_lookup_resume] 0-stack-trace: stack-address: 0x61c000019098, winding from test-volume-io-threads to test-volume-md-cache
[2023-03-04 12:20:56.527916 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.527904 +0000] T [MSGID: 0] [md-cache.c:1337:mdc_lookup] 0-stack-trace: stack-address: 0x61c000019098, winding from test-volume-md-cache to test-volume-quick-read
[2023-03-04 12:20:56.528015 +0000] T [MSGID: 0] [quick-read.c:647:qr_lookup] 0-stack-trace: stack-address: 0x61c000019098, winding from test-volume-quick-read to test-volume-open-behind
[2023-03-04 12:20:56.528058 +0000] T [MSGID: 0] [defaults.c:3024:default_lookup] 0-stack-trace: stack-address: 0x61c000019098, winding from test-volume-open-behind to test-volume-write-behind
[2023-03-04 12:20:56.528101 +0000] T [MSGID: 0] [write-behind.c:2413:wb_lookup] 0-stack-trace: stack-address: 0x61c000019098, winding from test-volume-write-behind to test-volume-utime
[2023-03-04 12:20:56.528151 +0000] T [MSGID: 0] [utime.c:260:gf_utime_lookup] 0-stack-trace: stack-address: 0x61c000019098, winding from test-volume-utime to test-volume-dht
[2023-03-04 12:20:56.528220 +0000] T [MSGID: 0] [dht-hashfn.c:95:dht_hash_compute] 0-test-volume-dht: trying regex for testdir
[2023-03-04 12:20:56.528358 +0000] D [MSGID: 0] [dht-common.c:3358:dht_do_fresh_lookup] 0-test-volume-dht: /testdir: Calling fresh lookup on test-volume-client-0
[2023-03-04 12:20:56.528422 +0000] T [MSGID: 0] [dht-common.c:3361:dht_do_fresh_lookup] 0-stack-trace: stack-address: 0x61c000019098, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.528640 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 504, rpc hdr: 84
[2023-03-04 12:20:56.528787 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 24, XID: 0x16, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.529854 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.529915 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.529942 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.529982 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.530066 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x16 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.530113 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:2565:client4_0_lookup_cbk] 0-test-volume-client-0: not found on remote node
[2023-03-04 12:20:56.530185 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000019098, test-volume-client-0 returned -1 [No such file or directory]
[2023-03-04 12:20:56.530247 +0000] D [MSGID: 0] [dht-common.c:3070:dht_lookup_cbk] 0-test-volume-dht: /testdir: fresh_lookup on test-volume-client-0 returned with op_ret -1 [No such file or directory]
[2023-03-04 12:20:56.530323 +0000] D [MSGID: 0] [dht-common.c:3081:dht_lookup_cbk] 0-test-volume-dht: Entry /testdir missing on subvol test-volume-client-0
[2023-03-04 12:20:56.530388 +0000] D [MSGID: 0] [dht-common.c:2871:dht_lookup_everywhere] 0-test-volume-dht: winding lookup call to 2 subvols
[2023-03-04 12:20:56.530431 +0000] T [MSGID: 0] [dht-common.c:2874:dht_lookup_everywhere] 0-stack-trace: stack-address: 0x61c000019098, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.530694 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 504, rpc hdr: 84
[2023-03-04 12:20:56.530792 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 24, XID: 0x17, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.530855 +0000] T [MSGID: 0] [dht-common.c:2874:dht_lookup_everywhere] 0-stack-trace: stack-address: 0x61c000019098, winding from test-volume-dht to test-volume-client-1
[2023-03-04 12:20:56.531061 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 504, rpc hdr: 84
[2023-03-04 12:20:56.531163 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 24, XID: 0x14, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.531242 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.531741 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.531805 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.531845 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.531897 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.532007 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x17 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.532048 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:2565:client4_0_lookup_cbk] 0-test-volume-client-0: not found on remote node
[2023-03-04 12:20:56.532131 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000019098, test-volume-client-0 returned -1 [No such file or directory]
[2023-03-04 12:20:56.532187 +0000] D [MSGID: 0] [dht-common.c:2684:dht_lookup_everywhere_cbk] 0-test-volume-dht: returned with op_ret -1 and op_errno 2 (/testdir) from subvol test-volume-client-0
[2023-03-04 12:20:56.532253 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.532362 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-1: client (sock:12) in:1, out:0, err:0
[2023-03-04 12:20:56.532410 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-1: Client socket (12) is already connected
[2023-03-04 12:20:56.532436 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.532476 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.532527 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-1: received rpc message (RPC XID: 0x14 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) from rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.532557 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:2565:client4_0_lookup_cbk] 0-test-volume-client-1: not found on remote node
[2023-03-04 12:20:56.532605 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000019098, test-volume-client-1 returned -1 [No such file or directory]
[2023-03-04 12:20:56.532679 +0000] D [MSGID: 0] [dht-common.c:2684:dht_lookup_everywhere_cbk] 0-test-volume-dht: returned with op_ret -1 and op_errno 2 (/testdir) from subvol test-volume-client-1
[2023-03-04 12:20:56.532732 +0000] D [MSGID: 0] [dht-common.c:2381:dht_lookup_everywhere_done] 0-test-volume-dht: STATUS: hashed_subvol test-volume-client-0 cached_subvol null
[2023-03-04 12:20:56.532807 +0000] D [MSGID: 0] [dht-common.c:2436:dht_lookup_everywhere_done] 0-test-volume-dht: There was no cached file and unlink on hashed is not skipped /testdir
[2023-03-04 12:20:56.532871 +0000] D [MSGID: 0] [dht-common.c:2441:dht_lookup_everywhere_done] 0-stack-trace: stack-address: 0x61c000019098, test-volume-dht returned -1 [No such file or directory]
[2023-03-04 12:20:56.532942 +0000] D [MSGID: 0] [utime.c:218:gf_utime_set_mdata_lookup_cbk] 0-stack-trace: stack-address: 0x61c000019098, test-volume-utime returned -1 [No such file or directory]
[2023-03-04 12:20:56.532989 +0000] D [MSGID: 0] [write-behind.c:2371:wb_lookup_cbk] 0-stack-trace: stack-address: 0x61c000019098, test-volume-write-behind returned -1 [No such file or directory]
[2023-03-04 12:20:56.533054 +0000] D [MSGID: 0] [quick-read.c:606:qr_lookup_cbk] 0-stack-trace: stack-address: 0x61c000019098, test-volume-quick-read returned -1 [No such file or directory]
[2023-03-04 12:20:56.533500 +0000] D [MSGID: 0] [md-cache.c:1269:mdc_lookup_cbk] 0-stack-trace: stack-address: 0x61c000019098, test-volume-md-cache returned -1 [No such file or directory]
[2023-03-04 12:20:56.533563 +0000] D [MSGID: 0] [defaults.c:1546:default_lookup_cbk] 0-stack-trace: stack-address: 0x61c000019098, test-volume-io-threads returned -1 [No such file or directory]
[2023-03-04 12:20:56.533615 +0000] D [MSGID: 0] [io-stats.c:2297:io_stats_lookup_cbk] 0-stack-trace: stack-address: 0x61c000019098, test-volume returned -1 [No such file or directory]
[2023-03-04 12:20:56.533677 +0000] T [fuse-bridge.c:1078:fuse_entry_cbk] 0-glusterfs-fuse: 24: LOOKUP() /testdir => -1 (No such file or directory)
[2023-03-04 12:20:56.534036 +0000] T [fuse-bridge.c:374:send_fuse_iov] 0-glusterfs-fuse: writev() result 16/16
[2023-03-04 12:20:56.534357 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-1: (sock:12) socket_event_poll_in returned 0
[2023-03-04 12:20:56.534561 +0000] T [MSGID: 0] [fuse-resolve.c:100:fuse_resolve_entry] 0-stack-trace: stack-address: 0x61c000032098, winding from fuse to meta-autoload
[2023-03-04 12:20:56.534801 +0000] T [MSGID: 0] [defaults.c:3024:default_lookup] 0-stack-trace: stack-address: 0x61c000032098, winding from meta-autoload to test-volume
[2023-03-04 12:20:56.534971 +0000] T [MSGID: 0] [io-stats.c:2768:io_stats_lookup] 0-stack-trace: stack-address: 0x61c000032098, winding from test-volume to test-volume-io-threads
[2023-03-04 12:20:56.535088 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: LOOKUP scheduled as fast priority fop
[2023-03-04 12:20:56.535458 +0000] T [MSGID: 0] [defaults.c:2215:default_lookup_resume] 0-stack-trace: stack-address: 0x61c000032098, winding from test-volume-io-threads to test-volume-md-cache
[2023-03-04 12:20:56.535633 +0000] T [MSGID: 0] [md-cache.c:1337:mdc_lookup] 0-stack-trace: stack-address: 0x61c000032098, winding from test-volume-md-cache to test-volume-quick-read
[2023-03-04 12:20:56.535708 +0000] T [MSGID: 0] [quick-read.c:647:qr_lookup] 0-stack-trace: stack-address: 0x61c000032098, winding from test-volume-quick-read to test-volume-open-behind
[2023-03-04 12:20:56.535830 +0000] T [MSGID: 0] [defaults.c:3024:default_lookup] 0-stack-trace: stack-address: 0x61c000032098, winding from test-volume-open-behind to test-volume-write-behind
[2023-03-04 12:20:56.535902 +0000] T [MSGID: 0] [write-behind.c:2413:wb_lookup] 0-stack-trace: stack-address: 0x61c000032098, winding from test-volume-write-behind to test-volume-utime
[2023-03-04 12:20:56.535980 +0000] T [MSGID: 0] [utime.c:260:gf_utime_lookup] 0-stack-trace: stack-address: 0x61c000032098, winding from test-volume-utime to test-volume-dht
[2023-03-04 12:20:56.536118 +0000] T [MSGID: 0] [dht-hashfn.c:95:dht_hash_compute] 0-test-volume-dht: trying regex for testdir
[2023-03-04 12:20:56.536279 +0000] D [MSGID: 0] [dht-common.c:3316:dht_do_fresh_lookup] 0-test-volume-dht: /testdir: No gfid-req available
[2023-03-04 12:20:56.536416 +0000] D [MSGID: 0] [dht-common.c:3358:dht_do_fresh_lookup] 0-test-volume-dht: /testdir: Calling fresh lookup on test-volume-client-0
[2023-03-04 12:20:56.536489 +0000] T [MSGID: 0] [dht-common.c:3361:dht_do_fresh_lookup] 0-stack-trace: stack-address: 0x61c000032098, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.536856 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 460, rpc hdr: 84
[2023-03-04 12:20:56.537111 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 25, XID: 0x18, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.540979 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.541107 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.541145 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.541243 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.541389 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x18 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.541458 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:2565:client4_0_lookup_cbk] 0-test-volume-client-0: not found on remote node
[2023-03-04 12:20:56.541535 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032098, test-volume-client-0 returned -1 [No such file or directory]
[2023-03-04 12:20:56.541632 +0000] D [MSGID: 0] [dht-common.c:3070:dht_lookup_cbk] 0-test-volume-dht: /testdir: fresh_lookup on test-volume-client-0 returned with op_ret -1 [No such file or directory]
[2023-03-04 12:20:56.541706 +0000] D [MSGID: 0] [dht-common.c:3081:dht_lookup_cbk] 0-test-volume-dht: Entry /testdir missing on subvol test-volume-client-0
[2023-03-04 12:20:56.541782 +0000] D [MSGID: 0] [dht-common.c:2871:dht_lookup_everywhere] 0-test-volume-dht: winding lookup call to 2 subvols
[2023-03-04 12:20:56.541895 +0000] T [MSGID: 0] [dht-common.c:2874:dht_lookup_everywhere] 0-stack-trace: stack-address: 0x61c000032098, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.542182 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 460, rpc hdr: 84
[2023-03-04 12:20:56.542577 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 25, XID: 0x19, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.542667 +0000] T [MSGID: 0] [dht-common.c:2874:dht_lookup_everywhere] 0-stack-trace: stack-address: 0x61c000032098, winding from test-volume-dht to test-volume-client-1
[2023-03-04 12:20:56.542902 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 460, rpc hdr: 84
[2023-03-04 12:20:56.543027 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 25, XID: 0x15, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.543138 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.544118 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.544196 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.544234 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.544335 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.544450 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x19 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.544506 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:2565:client4_0_lookup_cbk] 0-test-volume-client-0: not found on remote node
[2023-03-04 12:20:56.544577 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032098, test-volume-client-0 returned -1 [No such file or directory]
[2023-03-04 12:20:56.544657 +0000] D [MSGID: 0] [dht-common.c:2684:dht_lookup_everywhere_cbk] 0-test-volume-dht: returned with op_ret -1 and op_errno 2 (/testdir) from subvol test-volume-client-0
[2023-03-04 12:20:56.544812 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.544860 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-1: client (sock:12) in:1, out:0, err:0
[2023-03-04 12:20:56.544924 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-1: Client socket (12) is already connected
[2023-03-04 12:20:56.544960 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.545010 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.545103 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-1: received rpc message (RPC XID: 0x15 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) from rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.545164 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:2565:client4_0_lookup_cbk] 0-test-volume-client-1: not found on remote node
[2023-03-04 12:20:56.545229 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032098, test-volume-client-1 returned -1 [No such file or directory]
[2023-03-04 12:20:56.545297 +0000] D [MSGID: 0] [dht-common.c:2684:dht_lookup_everywhere_cbk] 0-test-volume-dht: returned with op_ret -1 and op_errno 2 (/testdir) from subvol test-volume-client-1
[2023-03-04 12:20:56.545414 +0000] D [MSGID: 0] [dht-common.c:2381:dht_lookup_everywhere_done] 0-test-volume-dht: STATUS: hashed_subvol test-volume-client-0 cached_subvol null
[2023-03-04 12:20:56.545536 +0000] D [MSGID: 0] [dht-common.c:2436:dht_lookup_everywhere_done] 0-test-volume-dht: There was no cached file and unlink on hashed is not skipped /testdir
[2023-03-04 12:20:56.545594 +0000] D [MSGID: 0] [dht-common.c:2441:dht_lookup_everywhere_done] 0-stack-trace: stack-address: 0x61c000032098, test-volume-dht returned -1 [No such file or directory]
[2023-03-04 12:20:56.545662 +0000] D [MSGID: 0] [utime.c:218:gf_utime_set_mdata_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032098, test-volume-utime returned -1 [No such file or directory]
[2023-03-04 12:20:56.545729 +0000] D [MSGID: 0] [write-behind.c:2371:wb_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032098, test-volume-write-behind returned -1 [No such file or directory]
[2023-03-04 12:20:56.545825 +0000] D [MSGID: 0] [quick-read.c:606:qr_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032098, test-volume-quick-read returned -1 [No such file or directory]
[2023-03-04 12:20:56.545895 +0000] D [MSGID: 0] [md-cache.c:1269:mdc_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032098, test-volume-md-cache returned -1 [No such file or directory]
[2023-03-04 12:20:56.545983 +0000] D [MSGID: 0] [defaults.c:1546:default_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032098, test-volume-io-threads returned -1 [No such file or directory]
[2023-03-04 12:20:56.546065 +0000] D [MSGID: 0] [io-stats.c:2297:io_stats_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032098, test-volume returned -1 [No such file or directory]
[2023-03-04 12:20:56.546182 +0000] D [fuse-resolve.c:59:fuse_resolve_entry_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001/testdir: failed to resolve (No such file or directory)
[2023-03-04 12:20:56.546284 +0000] T [fuse-bridge.c:2342:fuse_mkdir_resume] 0-glusterfs-fuse: 10: MKDIR /testdir
[2023-03-04 12:20:56.546422 +0000] T [MSGID: 0] [fuse-bridge.c:2345:fuse_mkdir_resume] 0-stack-trace: stack-address: 0x61c000027898, winding from fuse to meta-autoload
[2023-03-04 12:20:56.546560 +0000] T [MSGID: 0] [defaults.c:2692:default_mkdir] 0-stack-trace: stack-address: 0x61c000027898, winding from meta-autoload to test-volume
[2023-03-04 12:20:56.546668 +0000] T [MSGID: 0] [io-stats.c:2814:io_stats_mkdir] 0-stack-trace: stack-address: 0x61c000027898, winding from test-volume to test-volume-io-threads
[2023-03-04 12:20:56.546804 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: MKDIR scheduled as normal priority fop
[2023-03-04 12:20:56.547112 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-1: (sock:12) socket_event_poll_in returned 0
[2023-03-04 12:20:56.547686 +0000] T [MSGID: 0] [defaults.c:1946:default_mkdir_resume] 0-stack-trace: stack-address: 0x61c000027898, winding from test-volume-io-threads to test-volume-md-cache
[2023-03-04 12:20:56.547824 +0000] T [MSGID: 0] [md-cache.c:1675:mdc_mkdir] 0-stack-trace: stack-address: 0x61c000027898, winding from test-volume-md-cache to test-volume-quick-read
[2023-03-04 12:20:56.547907 +0000] T [MSGID: 0] [defaults.c:2692:default_mkdir] 0-stack-trace: stack-address: 0x61c000027898, winding from test-volume-quick-read to test-volume-open-behind
[2023-03-04 12:20:56.547970 +0000] T [MSGID: 0] [defaults.c:2692:default_mkdir] 0-stack-trace: stack-address: 0x61c000027898, winding from test-volume-open-behind to test-volume-write-behind
[2023-03-04 12:20:56.548031 +0000] T [MSGID: 0] [defaults.c:2692:default_mkdir] 0-stack-trace: stack-address: 0x61c000027898, winding from test-volume-write-behind to test-volume-utime
[2023-03-04 12:20:56.548117 +0000] T [MSGID: 0] [utime-autogen-fops.c:345:gf_utime_mkdir] 0-stack-trace: stack-address: 0x61c000027898, winding from test-volume-utime to test-volume-dht
[2023-03-04 12:20:56.548264 +0000] T [MSGID: 0] [dht-diskusage.c:191:dht_get_du_info] 0-stack-trace: stack-address: 0x61c000042898, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.548430 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 64, rpc hdr: 84
[2023-03-04 12:20:56.548640 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 26, XID: 0x1a, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 14) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.548767 +0000] T [MSGID: 0] [dht-diskusage.c:191:dht_get_du_info] 0-stack-trace: stack-address: 0x61c000042898, winding from test-volume-dht to test-volume-client-1
[2023-03-04 12:20:56.548879 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 64, rpc hdr: 84
[2023-03-04 12:20:56.548994 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 26, XID: 0x16, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 14) to rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.549083 +0000] T [MSGID: 0] [dht-hashfn.c:95:dht_hash_compute] 0-test-volume-dht: trying regex for testdir
[2023-03-04 12:20:56.549401 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:20:56.549223 +0000] T [MSGID: 0] [dht-hashfn.c:95:dht_hash_compute] 0-test-volume-dht: trying regex for testdir
[2023-03-04 12:20:56.549387 +0000] T [MSGID: 0] [dht-lock.c:1014:dht_blocking_inodelk_rec] 0-stack-trace: stack-address: 0x61c000043098, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.549662 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 88, rpc hdr: 88
[2023-03-04 12:20:56.549794 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 26, XID: 0x1b, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 29) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.550016 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.550067 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.550115 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.550174 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.550266 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-1: client (sock:12) in:1, out:0, err:0
[2023-03-04 12:20:56.550333 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-1: Client socket (12) is already connected
[2023-03-04 12:20:56.550386 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.550436 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.550525 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-1: received rpc message (RPC XID: 0x16 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 14) from rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.550572 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:662:client4_0_statfs_cbk] 0-stack-trace: stack-address: 0x61c000042898, test-volume-client-1 returned 0
[2023-03-04 12:20:56.550694 +0000] D [MSGID: 0] [dht-diskusage.c:82:dht_du_info_cbk] 0-test-volume-dht: subvolume 'test-volume-client-1': avail_percent is: 90.00 and avail_space is: 39740932096 and avail_inodes is: 98.00
[2023-03-04 12:20:56.550793 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-1: (sock:12) socket_event_poll_in returned 0
[2023-03-04 12:20:56.550864 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x1a Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 14) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.550913 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:662:client4_0_statfs_cbk] 0-stack-trace: stack-address: 0x61c000042898, test-volume-client-0 returned 0
[2023-03-04 12:20:56.550980 +0000] D [MSGID: 0] [dht-diskusage.c:82:dht_du_info_cbk] 0-test-volume-dht: subvolume 'test-volume-client-0': avail_percent is: 90.00 and avail_space is: 39740919808 and avail_inodes is: 98.00
[2023-03-04 12:20:56.551152 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.552291 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.552395 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.552433 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.552493 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.552630 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x1b Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 29) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.552683 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:1329:client4_0_inodelk_cbk] 0-stack-trace: stack-address: 0x61c000043098, test-volume-client-0 returned 0
[2023-03-04 12:20:56.552849 +0000] T [MSGID: 0] [dht-lock.c:581:dht_blocking_entrylk_rec] 0-stack-trace: stack-address: 0x61c00001a898, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.553379 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 76, rpc hdr: 88
[2023-03-04 12:20:56.553596 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 26, XID: 0x1c, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 31) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.553769 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.554542 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.554607 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.554645 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.554698 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.554784 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x1c Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 31) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.554827 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:1423:client4_0_entrylk_cbk] 0-stack-trace: stack-address: 0x61c00001a898, test-volume-client-0 returned 0
[2023-03-04 12:20:56.555290 +0000] T [MSGID: 0] [dht-common.c:9612:dht_mkdir_guard_parent_layout_cbk] 0-stack-trace: stack-address: 0x61c000027898, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.555518 +0000] D [MSGID: 101015] [glusterfs3.h:785:dict_to_xdr] 0-dict: key 'trusted.glusterfs.dht.mds' would not be sent on wire in the future [Invalid argument]
[2023-03-04 12:20:56.555650 +0000] D [MSGID: 101015] [glusterfs3.h:785:dict_to_xdr] 0-dict: key 'trusted.glusterfs.dht' would not be sent on wire in the future [Invalid argument]
[2023-03-04 12:20:56.555780 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 244, rpc hdr: 84
[2023-03-04 12:20:56.555912 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 26, XID: 0x1d, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 4) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.556053 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.559273 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.559394 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.559486 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.559569 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.559667 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x1d Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 4) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.559726 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:284:client4_0_mkdir_cbk] 0-stack-trace: stack-address: 0x61c000027898, test-volume-client-0 returned 0
[2023-03-04 12:20:56.559884 +0000] T [MSGID: 0] [dht-layout.c:327:dht_layout_merge] 0-test-volume-dht: Missing disk layout on test-volume-client-0. err = -1
[2023-03-04 12:20:56.560008 +0000] T [MSGID: 0] [dht-common.c:9558:dht_mkdir_hashed_cbk] 0-stack-trace: stack-address: 0x61c000027898, winding from test-volume-dht to test-volume-client-1
[2023-03-04 12:20:56.560175 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 84, rpc hdr: 84
[2023-03-04 12:20:56.560341 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 26, XID: 0x17, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 4) to rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.560544 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.563722 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-1: client (sock:12) in:1, out:0, err:0
[2023-03-04 12:20:56.563796 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-1: Client socket (12) is already connected
[2023-03-04 12:20:56.563833 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.563888 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.563979 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-1: received rpc message (RPC XID: 0x17 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 4) from rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.564033 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:284:client4_0_mkdir_cbk] 0-stack-trace: stack-address: 0x61c000027898, test-volume-client-1 returned 0
[2023-03-04 12:20:56.564108 +0000] T [MSGID: 0] [dht-layout.c:327:dht_layout_merge] 0-test-volume-dht: Missing disk layout on test-volume-client-1. err = -1
[2023-03-04 12:20:56.564364 +0000] T [MSGID: 0] [dht-lock.c:407:dht_unlock_entrylk] 0-stack-trace: stack-address: 0x61c00001b898, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.564533 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 76, rpc hdr: 88
[2023-03-04 12:20:56.564677 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 26, XID: 0x1e, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 31) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.564835 +0000] T [MSGID: 0] [dht-lock.c:791:dht_unlock_inodelk] 0-stack-trace: stack-address: 0x61c00001c898, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.564981 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 88, rpc hdr: 88
[2023-03-04 12:20:56.565101 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 26, XID: 0x1f, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 29) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.565207 +0000] D [MSGID: 0] [dht-selfheal.c:1781:dht_selfheal_layout_new_directory] 0-test-volume-dht: chunk size = 0xffffffff / 84154 = 51036.995211
[2023-03-04 12:20:56.565323 +0000] T [MSGID: 0] [dht-hashfn.c:95:dht_hash_compute] 0-test-volume-dht: trying regex for /testdir
[2023-03-04 12:20:56.565445 +0000] D [MSGID: 0] [dht-selfheal.c:1818:dht_selfheal_layout_new_directory] 0-test-volume-dht: assigning range size 0x7fffffff to test-volume-client-1
[2023-03-04 12:20:56.565631 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.565675 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.565712 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.565763 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.565848 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x1e Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 31) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.565890 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:1423:client4_0_entrylk_cbk] 0-stack-trace: stack-address: 0x61c00001b898, test-volume-client-0 returned 0
[2023-03-04 12:20:56.566056 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.566101 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.566136 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.566171 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.566219 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.566345 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x1f Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 29) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.565552 +0000] T [MSGID: 0] [dht-selfheal.c:1820:dht_selfheal_layout_new_directory] 0-test-volume-dht: gave fix: 0x0 - 0x7ffffffe, with commit-hash 0x0 on test-volume-client-1 for /testdir
[2023-03-04 12:20:56.566408 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:1329:client4_0_inodelk_cbk] 0-stack-trace: stack-address: 0x61c00001c898, test-volume-client-0 returned 0
[2023-03-04 12:20:56.566520 +0000] D [MSGID: 0] [dht-selfheal.c:1818:dht_selfheal_layout_new_directory] 0-test-volume-dht: assigning range size 0x7fffffff to test-volume-client-0
[2023-03-04 12:20:56.566574 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.566623 +0000] T [MSGID: 0] [dht-selfheal.c:1820:dht_selfheal_layout_new_directory] 0-test-volume-dht: gave fix: 0x7fffffff - 0xfffffffd, with commit-hash 0x0 on test-volume-client-0 for /testdir
[2023-03-04 12:20:56.566728 +0000] T [MSGID: 0] [dht-lock.c:1014:dht_blocking_inodelk_rec] 0-stack-trace: stack-address: 0x61c00001d098, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.566834 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 88, rpc hdr: 88
[2023-03-04 12:20:56.566933 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 26, XID: 0x20, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 29) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.567042 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-1: (sock:12) socket_event_poll_in returned 0
[2023-03-04 12:20:56.567562 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.567603 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.567628 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.567662 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.567718 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x20 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 29) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.567784 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:1329:client4_0_inodelk_cbk] 0-stack-trace: stack-address: 0x61c00001d098, test-volume-client-0 returned 0
[2023-03-04 12:20:56.567851 +0000] T [MSGID: 0] [dht-selfheal.c:285:dht_refresh_layout] 0-stack-trace: stack-address: 0x61c000027898, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.567995 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 100, rpc hdr: 84
[2023-03-04 12:20:56.568075 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 26, XID: 0x21, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.568134 +0000] T [MSGID: 0] [dht-selfheal.c:285:dht_refresh_layout] 0-stack-trace: stack-address: 0x61c000027898, winding from test-volume-dht to test-volume-client-1
[2023-03-04 12:20:56.568234 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 100, rpc hdr: 84
[2023-03-04 12:20:56.568364 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 26, XID: 0x18, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.568481 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.569265 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.569334 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.569424 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-1: client (sock:12) in:1, out:0, err:0
[2023-03-04 12:20:56.569480 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-1: Client socket (12) is already connected
[2023-03-04 12:20:56.569443 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.569524 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.569575 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.569624 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.569669 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x21 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.569707 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-1: received rpc message (RPC XID: 0x18 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) from rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.569743 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000027898, test-volume-client-0 returned 0
[2023-03-04 12:20:56.569761 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000027898, test-volume-client-1 returned 0
[2023-03-04 12:20:56.569794 +0000] T [MSGID: 0] [dht-layout.c:327:dht_layout_merge] 0-test-volume-dht: Missing disk layout on test-volume-client-0. err = -1
[2023-03-04 12:20:56.569912 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.569911 +0000] T [MSGID: 0] [dht-layout.c:327:dht_layout_merge] 0-test-volume-dht: Missing disk layout on test-volume-client-1. err = -1
[2023-03-04 12:20:56.569992 +0000] T [MSGID: 0] [dht-selfheal.c:889:dht_selfheal_dir_xattr] 0-test-volume-dht: 2 subvolumes missing xattr for /testdir
[2023-03-04 12:20:56.570096 +0000] D [MSGID: 109036] [dht-common.c:11372:dht_log_new_layout_for_dir_selfheal] 0-test-volume-dht: Setting layout of /testdir with [Subvol_name: test-volume-client-0, Err: -1 , Start: 0x7fffffff, Stop: 0xffffffff, Hash: 0x0], [Subvol_name: test-volume-client-1, Err: -1 , Start: 0x0, Stop: 0x7ffffffe, Hash: 0x0],
[2023-03-04 12:20:56.570266 +0000] T [MSGID: 0] [dht-selfheal.c:755:dht_selfheal_dir_xattr_persubvol] 0-test-volume-dht: setting hash range 0x7fffffff - 0xffffffff (type 0) on subvolume test-volume-client-0 for /testdir
[2023-03-04 12:20:56.570389 +0000] T [MSGID: 0] [dht-selfheal.c:784:dht_selfheal_dir_xattr_persubvol] 0-stack-trace: stack-address: 0x61c000027898, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.570490 +0000] T [MSGID: 0] [client.c:1104:is_client_rpc_init_command] 0-test-volume-client-0: key trusted.glusterfs.client-connect not present
[2023-03-04 12:20:56.570581 +0000] T [MSGID: 0] [client.c:1122:is_client_rpc_destroy_command] 0-test-volume-client-0: name is !replace-brick
[2023-03-04 12:20:56.570653 +0000] D [MSGID: 101015] [glusterfs3.h:785:dict_to_xdr] 0-dict: key 'trusted.glusterfs.dht' would not be sent on wire in the future [Invalid argument]
[2023-03-04 12:20:56.570809 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 176, rpc hdr: 84
[2023-03-04 12:20:56.570936 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 26, XID: 0x22, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 17) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.571080 +0000] T [MSGID: 0] [dht-selfheal.c:755:dht_selfheal_dir_xattr_persubvol] 0-test-volume-dht: setting hash range 0x0 - 0x7ffffffe (type 0) on subvolume test-volume-client-1 for /testdir
[2023-03-04 12:20:56.571159 +0000] T [MSGID: 0] [dht-selfheal.c:784:dht_selfheal_dir_xattr_persubvol] 0-stack-trace: stack-address: 0x61c000027898, winding from test-volume-dht to test-volume-client-1
[2023-03-04 12:20:56.571224 +0000] T [MSGID: 0] [client.c:1104:is_client_rpc_init_command] 0-test-volume-client-1: key trusted.glusterfs.client-connect not present
[2023-03-04 12:20:56.571285 +0000] T [MSGID: 0] [client.c:1122:is_client_rpc_destroy_command] 0-test-volume-client-1: name is !replace-brick
[2023-03-04 12:20:56.571378 +0000] D [MSGID: 101015] [glusterfs3.h:785:dict_to_xdr] 0-dict: key 'trusted.glusterfs.dht' would not be sent on wire in the future [Invalid argument]
[2023-03-04 12:20:56.571488 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 176, rpc hdr: 84
[2023-03-04 12:20:56.571586 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 26, XID: 0x19, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 17) to rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.571714 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-1: (sock:12) socket_event_poll_in returned 0
[2023-03-04 12:20:56.572073 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.572138 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.572175 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.572225 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.572325 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x22 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 17) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.572462 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:865:client4_0_setxattr_cbk] 0-stack-trace: stack-address: 0x61c000027898, test-volume-client-0 returned 0
[2023-03-04 12:20:56.572594 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-1: client (sock:12) in:1, out:0, err:0
[2023-03-04 12:20:56.572657 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-1: Client socket (12) is already connected
[2023-03-04 12:20:56.572668 +0000] D [dict.c:2466:dict_get_bin] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0x720d7) [0x7fffeedf80d7] -->/usr/local/lib/libglusterfs.so.0(dict_get_bin+0x2d4) [0x7ffff72d8ea4] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: key dht-get-iatt-in-xattr, pointer type asked, has iatt type [Invalid argument]
[2023-03-04 12:20:56.572697 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.572821 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.572832 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.572980 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-1: received rpc message (RPC XID: 0x19 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 17) from rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.573080 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:865:client4_0_setxattr_cbk] 0-stack-trace: stack-address: 0x61c000027898, test-volume-client-1 returned 0
[2023-03-04 12:20:56.573256 +0000] D [dict.c:2466:dict_get_bin] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0x720d7) [0x7fffeedf80d7] -->/usr/local/lib/libglusterfs.so.0(dict_get_bin+0x2d4) [0x7ffff72d8ea4] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: key dht-get-iatt-in-xattr, pointer type asked, has iatt type [Invalid argument]
[2023-03-04 12:20:56.573398 +0000] T [MSGID: 0] [dht-lock.c:791:dht_unlock_inodelk] 0-stack-trace: stack-address: 0x61c00001e098, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.573593 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 88, rpc hdr: 88
[2023-03-04 12:20:56.573721 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 26, XID: 0x23, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 29) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.573834 +0000] T [MSGID: 0] [dht-common.c:9241:dht_mkdir_selfheal_cbk] 0-stack-trace: stack-address: 0x61c000027898, test-volume-dht returned 0
[2023-03-04 12:20:56.573957 +0000] T [MSGID: 0] [utime-autogen-fops.c:330:gf_utime_mkdir_cbk] 0-stack-trace: stack-address: 0x61c000027898, test-volume-utime returned 0
[2023-03-04 12:20:56.574102 +0000] T [md-cache.c:583:mdc_inode_iatt_set_validate] (-->/usr/local/lib/glusterfs/11dev/xlator/performance/md-cache.so(+0x294f3) [0x7fffeec9e4f3] -->/usr/local/lib/glusterfs/11dev/xlator/performance/md-cache.so(+0x26105) [0x7fffeec9b105] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-md-cache: Updated iatt(00000000-0000-0000-0000-000000000001) time:1677932456 generation=0
[2023-03-04 12:20:56.574205 +0000] T [md-cache.c:583:mdc_inode_iatt_set_validate] (-->/usr/local/lib/glusterfs/11dev/xlator/performance/md-cache.so(+0x2955c) [0x7fffeec9e55c] -->/usr/local/lib/glusterfs/11dev/xlator/performance/md-cache.so(+0x26105) [0x7fffeec9b105] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-md-cache: Updated iatt(304f3160-81b1-42f1-9b61-61b86af273ec) time:1677932456 generation=0
[2023-03-04 12:20:56.574265 +0000] T [MSGID: 0] [md-cache.c:1658:mdc_mkdir_cbk] 0-stack-trace: stack-address: 0x61c000027898, test-volume-md-cache returned 0
[2023-03-04 12:20:56.574396 +0000] T [MSGID: 0] [defaults.c:1312:default_mkdir_cbk] 0-stack-trace: stack-address: 0x61c000027898, test-volume-io-threads returned 0
[2023-03-04 12:20:56.574524 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.574610 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.574648 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.574698 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.574780 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x23 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 29) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.574857 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:1329:client4_0_inodelk_cbk] 0-stack-trace: stack-address: 0x61c00001e098, test-volume-client-0 returned 0
[2023-03-04 12:20:56.574991 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.575050 +0000] T [MSGID: 0] [io-stats.c:2348:io_stats_mkdir_cbk] 0-stack-trace: stack-address: 0x61c000027898, test-volume returned 0
[2023-03-04 12:20:56.575146 +0000] T [fuse-bridge.c:1034:fuse_entry_cbk] 0-glusterfs-fuse: 26: MKDIR() /testdir => 11196337593315914732
[2023-03-04 12:20:56.575251 +0000] D [dict.c:481:dict_get] (-->/usr/local/lib/glusterfs/11dev/xlator/mount/fuse.so(+0x301c8) [0x7ffff2d731c8] -->/usr/local/lib/libglusterfs.so.0(dict_get+0x100) [0x7ffff72cdfa0] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: !this || key=trusted.glusterfs.namespace [Invalid argument]
[2023-03-04 12:20:56.575352 +0000] T [fuse-bridge.c:374:send_fuse_iov] 0-glusterfs-fuse: writev() result 144/144
[2023-03-04 12:20:56.575604 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-1: (sock:12) socket_event_poll_in returned 0
[2023-03-04 12:20:56.580370 +0000] T [MSGID: 0] [fuse-bridge.c:1529:fuse_getattr] 0-stack-trace: stack-address: 0x61c000032898, winding from fuse to meta-autoload
[2023-03-04 12:20:56.580477 +0000] T [MSGID: 0] [defaults.c:3024:default_lookup] 0-stack-trace: stack-address: 0x61c000032898, winding from meta-autoload to test-volume
[2023-03-04 12:20:56.580567 +0000] T [MSGID: 0] [io-stats.c:2768:io_stats_lookup] 0-stack-trace: stack-address: 0x61c000032898, winding from test-volume to test-volume-io-threads
[2023-03-04 12:20:56.580654 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: LOOKUP scheduled as fast priority fop
[2023-03-04 12:20:56.580770 +0000] T [MSGID: 0] [defaults.c:2215:default_lookup_resume] 0-stack-trace: stack-address: 0x61c000032898, winding from test-volume-io-threads to test-volume-md-cache
[2023-03-04 12:20:56.580906 +0000] T [MSGID: 0] [md-cache.c:890:mdc_inode_xatt_get] 0-md-cache: xattr not present (00000000-0000-0000-0000-000000000001)
[2023-03-04 12:20:56.580992 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key gfid-req doesn't satisfy caching requirements
[2023-03-04 12:20:56.581071 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:20:56.581040 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key gfid-req doesn't satisfy caching requirements
[2023-03-04 12:20:56.581066 +0000] T [MSGID: 0] [md-cache.c:1337:mdc_lookup] 0-stack-trace: stack-address: 0x61c000032898, winding from test-volume-md-cache to test-volume-quick-read
[2023-03-04 12:20:56.581265 +0000] T [MSGID: 0] [quick-read.c:647:qr_lookup] 0-stack-trace: stack-address: 0x61c000032898, winding from test-volume-quick-read to test-volume-open-behind
[2023-03-04 12:20:56.581352 +0000] T [MSGID: 0] [defaults.c:3024:default_lookup] 0-stack-trace: stack-address: 0x61c000032898, winding from test-volume-open-behind to test-volume-write-behind
[2023-03-04 12:20:56.581460 +0000] T [MSGID: 0] [write-behind.c:2413:wb_lookup] 0-stack-trace: stack-address: 0x61c000032898, winding from test-volume-write-behind to test-volume-utime
[2023-03-04 12:20:56.581552 +0000] T [MSGID: 0] [utime.c:260:gf_utime_lookup] 0-stack-trace: stack-address: 0x61c000032898, winding from test-volume-utime to test-volume-dht
[2023-03-04 12:20:56.581764 +0000] T [MSGID: 0] [dht-common.c:3453:dht_do_revalidate] 0-stack-trace: stack-address: 0x61c000032898, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.581984 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 540, rpc hdr: 84
[2023-03-04 12:20:56.582168 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 27, XID: 0x24, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.582232 +0000] T [MSGID: 0] [dht-common.c:3453:dht_do_revalidate] 0-stack-trace: stack-address: 0x61c000032898, winding from test-volume-dht to test-volume-client-1
[2023-03-04 12:20:56.582515 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 540, rpc hdr: 84
[2023-03-04 12:20:56.582644 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 27, XID: 0x1a, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.583562 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.583625 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.583661 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.583714 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.583806 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x24 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.583978 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032898, test-volume-client-0 returned 0
[2023-03-04 12:20:56.584001 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-1: client (sock:12) in:1, out:0, err:0
[2023-03-04 12:20:56.584103 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-1: Client socket (12) is already connected
[2023-03-04 12:20:56.584108 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /: revalidate lookup on test-volume-client-0 returned op_ret 0
[2023-03-04 12:20:56.584143 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.584226 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.584208 +0000] D [MSGID: 0] [dht-common.c:1796:dht_revalidate_cbk] 0-test-volume-dht: /: internal xattr trusted.glusterfs.dht.mds is not present on subvol test-volume-client-0(gfid is 00000000-0000-0000-0000-000000000001)
[2023-03-04 12:20:56.584330 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-1: received rpc message (RPC XID: 0x1a Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) from rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.584364 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.584468 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032898, test-volume-client-1 returned 0
[2023-03-04 12:20:56.584545 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /: revalidate lookup on test-volume-client-1 returned op_ret 0
[2023-03-04 12:20:56.584616 +0000] D [MSGID: 0] [dht-common.c:1796:dht_revalidate_cbk] 0-test-volume-dht: /: internal xattr trusted.glusterfs.dht.mds is not present on subvol test-volume-client-1(gfid is 00000000-0000-0000-0000-000000000001)
[2023-03-04 12:20:56.584728 +0000] T [MSGID: 0] [dht-common.c:1975:dht_revalidate_cbk] 0-stack-trace: stack-address: 0x61c000032898, test-volume-dht returned 0
[2023-03-04 12:20:56.584805 +0000] T [MSGID: 0] [utime.c:218:gf_utime_set_mdata_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032898, test-volume-utime returned 0
[2023-03-04 12:20:56.584872 +0000] T [MSGID: 0] [write-behind.c:2371:wb_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032898, test-volume-write-behind returned 0
[2023-03-04 12:20:56.584979 +0000] T [MSGID: 0] [quick-read.c:606:qr_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032898, test-volume-quick-read returned 0
[2023-03-04 12:20:56.585128 +0000] T [md-cache.c:583:mdc_inode_iatt_set_validate] (-->/usr/local/lib/glusterfs/11dev/xlator/performance/md-cache.so(+0x26746) [0x7fffeec9b746] -->/usr/local/lib/glusterfs/11dev/xlator/performance/md-cache.so(+0x26105) [0x7fffeec9b105] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-md-cache: Updated iatt(00000000-0000-0000-0000-000000000001) time:1677932456 generation=0
[2023-03-04 12:20:56.585209 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.glusterfs.mdata doesn't satisfy caching requirements
[2023-03-04 12:20:56.585291 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:20:56.585273 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.glusterfs.mdata doesn't satisfy caching requirements
[2023-03-04 12:20:56.585284 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key glusterfs.open-fd-count doesn't satisfy caching requirements
[2023-03-04 12:20:56.585468 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:20:56.585451 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key glusterfs.open-fd-count doesn't satisfy caching requirements
[2023-03-04 12:20:56.585463 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.glusterfs.dht doesn't satisfy caching requirements
[2023-03-04 12:20:56.585604 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:20:56.585591 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.glusterfs.dht doesn't satisfy caching requirements
[2023-03-04 12:20:56.585599 +0000] T [MSGID: 0] [md-cache.c:806:mdc_inode_xatt_set] 0-md-cache: xatt cache set for (00000000-0000-0000-0000-000000000001) time:1677932456
[2023-03-04 12:20:56.585727 +0000] T [MSGID: 0] [md-cache.c:1269:mdc_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032898, test-volume-md-cache returned 0
[2023-03-04 12:20:56.585788 +0000] T [MSGID: 0] [defaults.c:1546:default_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032898, test-volume-io-threads returned 0
[2023-03-04 12:20:56.585851 +0000] T [MSGID: 0] [io-stats.c:2297:io_stats_lookup_cbk] 0-stack-trace: stack-address: 0x61c000032898, test-volume returned 0
[2023-03-04 12:20:56.585933 +0000] T [fuse-bridge.c:1378:fuse_attr_cbk] 0-glusterfs-fuse: 27: LOOKUP() / => 1
[2023-03-04 12:20:56.585997 +0000] T [fuse-bridge.c:374:send_fuse_iov] 0-glusterfs-fuse: writev() result 120/120
[2023-03-04 12:20:56.586096 +0000] D [fuse-helpers.c:680:fuse_ignore_xattr_set] 0-glusterfs-fuse: allowing setxattr: key [user.attr], client pid [0]
[2023-03-04 12:20:56.586206 +0000] T [fuse-bridge.c:4118:fuse_setxattr_resume] 0-glusterfs-fuse: 14: SETXATTR /testdir/106721347494936 (user.attr)
[2023-03-04 12:20:56.586258 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-1: (sock:12) socket_event_poll_in returned 0
[2023-03-04 12:20:56.586322 +0000] T [MSGID: 0] [fuse-bridge.c:4122:fuse_setxattr_resume] 0-stack-trace: stack-address: 0x61c000033098, winding from fuse to meta-autoload
[2023-03-04 12:20:56.586428 +0000] T [MSGID: 0] [defaults.c:2443:default_setxattr] 0-stack-trace: stack-address: 0x61c000033098, winding from meta-autoload to test-volume
[2023-03-04 12:20:56.586546 +0000] T [MSGID: 0] [io-stats.c:3264:io_stats_setxattr] 0-stack-trace: stack-address: 0x61c000033098, winding from test-volume to test-volume-io-threads
[2023-03-04 12:20:56.586655 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: SETXATTR scheduled as normal priority fop
[2023-03-04 12:20:56.586822 +0000] T [MSGID: 0] [defaults.c:1745:default_setxattr_resume] 0-stack-trace: stack-address: 0x61c000033098, winding from test-volume-io-threads to test-volume-md-cache
[2023-03-04 12:20:56.586924 +0000] T [MSGID: 0] [md-cache.c:2403:mdc_setxattr] 0-stack-trace: stack-address: 0x61c000033098, winding from test-volume-md-cache to test-volume-quick-read
[2023-03-04 12:20:56.586987 +0000] T [MSGID: 0] [defaults.c:2443:default_setxattr] 0-stack-trace: stack-address: 0x61c000033098, winding from test-volume-quick-read to test-volume-open-behind
[2023-03-04 12:20:56.587053 +0000] T [MSGID: 0] [defaults.c:2443:default_setxattr] 0-stack-trace: stack-address: 0x61c000033098, winding from test-volume-open-behind to test-volume-write-behind
[2023-03-04 12:20:56.587111 +0000] T [MSGID: 0] [defaults.c:2443:default_setxattr] 0-stack-trace: stack-address: 0x61c000033098, winding from test-volume-write-behind to test-volume-utime
[2023-03-04 12:20:56.587178 +0000] T [MSGID: 0] [defaults.c:2443:default_setxattr] 0-stack-trace: stack-address: 0x61c000033098, winding from test-volume-utime to test-volume-dht
[2023-03-04 12:20:56.587403 +0000] T [MSGID: 0] [dht-common.c:5587:dht_dir_common_set_remove_xattr] 0-stack-trace: stack-address: 0x61c000033098, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.587531 +0000] D [MSGID: 101015] [glusterfs3.h:785:dict_to_xdr] 0-dict: key 'trusted.glusterfs.dht.mds' would not be sent on wire in the future [Invalid argument]
[2023-03-04 12:20:56.587634 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 88, rpc hdr: 84
[2023-03-04 12:20:56.587740 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 28, XID: 0x25, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 33) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.589631 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.589712 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.589749 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.589805 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.589893 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x25 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 33) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.589963 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:1525:client4_0_xattrop_cbk] 0-test-volume-client-0: resetting op_ret to 0 from 0
[2023-03-04 12:20:56.590070 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:1530:client4_0_xattrop_cbk] 0-stack-trace: stack-address: 0x61c000033098, test-volume-client-0 returned 0
[2023-03-04 12:20:56.590171 +0000] T [MSGID: 0] [dht-common.c:4040:dht_xattrop_mds_cbk] 0-stack-trace: stack-address: 0x61c000033098, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.590245 +0000] T [MSGID: 0] [client.c:1104:is_client_rpc_init_command] 0-test-volume-client-0: key trusted.glusterfs.client-connect not present
[2023-03-04 12:20:56.590398 +0000] T [MSGID: 0] [client.c:1122:is_client_rpc_destroy_command] 0-test-volume-client-0: name is !replace-brick
[2023-03-04 12:20:56.590509 +0000] D [MSGID: 101015] [glusterfs3.h:785:dict_to_xdr] 0-dict: key 'user.attr' would not be sent on wire in the future [Invalid argument]
[2023-03-04 12:20:56.590690 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 72, rpc hdr: 84
[2023-03-04 12:20:56.590877 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 28, XID: 0x26, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 17) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.591045 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.592194 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.592287 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.592379 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.592456 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.592592 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x26 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 17) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.592735 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:865:client4_0_setxattr_cbk] 0-stack-trace: stack-address: 0x61c000033098, test-volume-client-0 returned 0
[2023-03-04 12:20:56.592877 +0000] T [MSGID: 0] [dht-common.c:3965:dht_setxattr_mds_cbk] 0-stack-trace: stack-address: 0x61c000033098, winding from test-volume-dht to test-volume-client-1
[2023-03-04 12:20:56.593021 +0000] T [MSGID: 0] [client.c:1104:is_client_rpc_init_command] 0-test-volume-client-1: key trusted.glusterfs.client-connect not present
[2023-03-04 12:20:56.593116 +0000] T [MSGID: 0] [client.c:1122:is_client_rpc_destroy_command] 0-test-volume-client-1: name is !replace-brick
[2023-03-04 12:20:56.593216 +0000] D [MSGID: 101015] [glusterfs3.h:785:dict_to_xdr] 0-dict: key 'user.attr' would not be sent on wire in the future [Invalid argument]
[2023-03-04 12:20:56.593443 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 72, rpc hdr: 84
[2023-03-04 12:20:56.593730 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 28, XID: 0x1b, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 17) to rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.593838 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:56.595776 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-1: client (sock:12) in:1, out:0, err:0
[2023-03-04 12:20:56.595852 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-1: Client socket (12) is already connected
[2023-03-04 12:20:56.595888 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.595963 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:56.596057 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-1: received rpc message (RPC XID: 0x1b Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 17) from rpc-transport (test-volume-client-1)
[2023-03-04 12:20:56.596175 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:865:client4_0_setxattr_cbk] 0-stack-trace: stack-address: 0x61c000033098, test-volume-client-1 returned 0
[2023-03-04 12:20:56.596265 +0000] T [MSGID: 0] [dht-common.c:3870:dht_setxattr_non_mds_cbk] 0-stack-trace: stack-address: 0x61c000033098, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:56.596471 +0000] D [MSGID: 101015] [glusterfs3.h:785:dict_to_xdr] 0-dict: key 'trusted.glusterfs.dht.mds' would not be sent on wire in the future [Invalid argument]
[2023-03-04 12:20:56.596601 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 88, rpc hdr: 84
[2023-03-04 12:20:56.596725 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 28, XID: 0x27, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 33) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.596857 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-1: (sock:12) socket_event_poll_in returned 0
[2023-03-04 12:20:56.598060 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:56.598156 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:56.598191 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.598241 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:56.598332 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x27 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 33) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:56.598395 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:1525:client4_0_xattrop_cbk] 0-test-volume-client-0: resetting op_ret to 0 from 0
[2023-03-04 12:20:56.598453 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:1530:client4_0_xattrop_cbk] 0-stack-trace: stack-address: 0x61c000033098, test-volume-client-0 returned 0
[2023-03-04 12:20:56.598518 +0000] T [MSGID: 0] [dht-common.c:3793:dht_common_mds_xattrop_cbk] 0-stack-trace: stack-address: 0x61c000033098, test-volume-dht returned 0
[2023-03-04 12:20:56.598641 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key user.attr doesn't satisfy caching requirements
[2023-03-04 12:20:56.598812 +0000] T [md-cache.c:583:mdc_inode_iatt_set_validate] (-->/usr/local/lib/glusterfs/11dev/xlator/performance/md-cache.so(+0x393c7) [0x7fffeecae3c7] -->/usr/local/lib/glusterfs/11dev/xlator/performance/md-cache.so(+0x26105) [0x7fffeec9b105] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-md-cache: Updated iatt(304f3160-81b1-42f1-9b61-61b86af273ec) time:1677932456 generation=0
[2023-03-04 12:20:56.598878 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:20:56.598726 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key user.attr doesn't satisfy caching requirements
[2023-03-04 12:20:56.598862 +0000] T [MSGID: 0] [md-cache.c:2386:mdc_setxattr_cbk] 0-stack-trace: stack-address: 0x61c000033098, test-volume-md-cache returned 0
[2023-03-04 12:20:56.599756 +0000] T [MSGID: 0] [defaults.c:1114:default_setxattr_cbk] 0-stack-trace: stack-address: 0x61c000033098, test-volume-io-threads returned 0
[2023-03-04 12:20:56.599890 +0000] T [MSGID: 0] [io-stats.c:2434:io_stats_setxattr_cbk] 0-stack-trace: stack-address: 0x61c000033098, test-volume returned 0
[2023-03-04 12:20:56.599973 +0000] T [fuse-bridge.c:1988:fuse_err_cbk] 0-glusterfs-fuse: 28: SETXATTR() /testdir => 0
[2023-03-04 12:20:56.600048 +0000] T [fuse-bridge.c:374:send_fuse_iov] 0-glusterfs-fuse: writev() result 16/16
[2023-03-04 12:20:56.600242 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:57.032946 +0000] T [fuse-bridge.c:1481:fuse_getattr_resume] 0-glusterfs-fuse: 16: GETATTR 106721347494936 (/testdir)
[2023-03-04 12:20:57.033266 +0000] T [MSGID: 0] [fuse-bridge.c:1485:fuse_getattr_resume] 0-stack-trace: stack-address: 0x61c000033898, winding from fuse to meta-autoload
[2023-03-04 12:20:57.033400 +0000] T [MSGID: 0] [defaults.c:3010:default_stat] 0-stack-trace: stack-address: 0x61c000033898, winding from meta-autoload to test-volume
[2023-03-04 12:20:57.033497 +0000] T [MSGID: 0] [io-stats.c:2778:io_stats_stat] 0-stack-trace: stack-address: 0x61c000033898, winding from test-volume to test-volume-io-threads
[2023-03-04 12:20:57.033581 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: STAT scheduled as fast priority fop
[2023-03-04 12:20:57.033719 +0000] T [MSGID: 0] [defaults.c:2204:default_stat_resume] 0-stack-trace: stack-address: 0x61c000033898, winding from test-volume-io-threads to test-volume-md-cache
[2023-03-04 12:20:57.033869 +0000] T [MSGID: 0] [md-cache.c:628:mdc_inode_iatt_get] 0-md-cache: iatt cache not valid for (304f3160-81b1-42f1-9b61-61b86af273ec)
[2023-03-04 12:20:57.034003 +0000] T [MSGID: 0] [md-cache.c:1412:mdc_stat] 0-stack-trace: stack-address: 0x61c000033898, winding from test-volume-md-cache to test-volume-quick-read
[2023-03-04 12:20:57.034131 +0000] T [MSGID: 0] [defaults.c:3010:default_stat] 0-stack-trace: stack-address: 0x61c000033898, winding from test-volume-quick-read to test-volume-open-behind
[2023-03-04 12:20:57.034215 +0000] T [MSGID: 0] [defaults.c:3010:default_stat] 0-stack-trace: stack-address: 0x61c000033898, winding from test-volume-open-behind to test-volume-write-behind
[2023-03-04 12:20:57.034333 +0000] T [MSGID: 0] [write-behind.c:2100:wb_stat] 0-stack-trace: stack-address: 0x61c000033898, winding from test-volume-write-behind to test-volume-utime
[2023-03-04 12:20:57.034425 +0000] T [MSGID: 0] [defaults.c:3010:default_stat] 0-stack-trace: stack-address: 0x61c000033898, winding from test-volume-utime to test-volume-dht
[2023-03-04 12:20:57.034965 +0000] T [MSGID: 0] [dht-inode-read.c:344:dht_stat] 0-stack-trace: stack-address: 0x61c000033898, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:57.035090 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 96, rpc hdr: 84
[2023-03-04 12:20:57.035214 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 29, XID: 0x28, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 1) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:57.035296 +0000] T [MSGID: 0] [dht-inode-read.c:344:dht_stat] 0-stack-trace: stack-address: 0x61c000033898, winding from test-volume-dht to test-volume-client-1
[2023-03-04 12:20:57.035431 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 96, rpc hdr: 84
[2023-03-04 12:20:57.035545 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 29, XID: 0x1c, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 1) to rpc-transport (test-volume-client-1)
[2023-03-04 12:20:57.037443 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:57.037587 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:57.037631 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:57.037693 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-1: client (sock:12) in:1, out:0, err:0
[2023-03-04 12:20:57.037707 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:57.037758 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-1: Client socket (12) is already connected
[2023-03-04 12:20:57.037837 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:57.037888 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:57.037881 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x28 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 1) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:57.037973 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-1: received rpc message (RPC XID: 0x1c Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 1) from rpc-transport (test-volume-client-1)
[2023-03-04 12:20:57.038020 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:399:client4_0_stat_cbk] 0-stack-trace: stack-address: 0x61c000033898, test-volume-client-0 returned 0
[2023-03-04 12:20:57.038031 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:399:client4_0_stat_cbk] 0-stack-trace: stack-address: 0x61c000033898, test-volume-client-1 returned 0
[2023-03-04 12:20:57.038274 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:57.038239 +0000] T [MSGID: 0] [dht-inode-read.c:290:dht_attr_cbk] 0-stack-trace: stack-address: 0x61c000033898, test-volume-dht returned 0
[2023-03-04 12:20:57.038389 +0000] T [MSGID: 0] [defaults.c:1533:default_stat_cbk] 0-stack-trace: stack-address: 0x61c000033898, test-volume-write-behind returned 0
[2023-03-04 12:20:57.038645 +0000] T [md-cache.c:583:mdc_inode_iatt_set_validate] (-->/usr/local/lib/glusterfs/11dev/xlator/performance/md-cache.so(+0x276f3) [0x7fffeec9c6f3] -->/usr/local/lib/glusterfs/11dev/xlator/performance/md-cache.so(+0x26105) [0x7fffeec9b105] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-md-cache: Updated iatt(304f3160-81b1-42f1-9b61-61b86af273ec) time:1677932457 generation=0
[2023-03-04 12:20:57.038711 +0000] T [MSGID: 0] [md-cache.c:806:mdc_inode_xatt_set] 0-md-cache: xatt cache set for (304f3160-81b1-42f1-9b61-61b86af273ec) time:1677932457
[2023-03-04 12:20:57.038788 +0000] T [MSGID: 0] [md-cache.c:1375:mdc_stat_cbk] 0-stack-trace: stack-address: 0x61c000033898, test-volume-md-cache returned 0
[2023-03-04 12:20:57.038851 +0000] T [MSGID: 0] [defaults.c:1533:default_stat_cbk] 0-stack-trace: stack-address: 0x61c000033898, test-volume-io-threads returned 0
[2023-03-04 12:20:57.038945 +0000] T [MSGID: 0] [io-stats.c:2129:io_stats_stat_cbk] 0-stack-trace: stack-address: 0x61c000033898, test-volume returned 0
[2023-03-04 12:20:57.039023 +0000] T [fuse-bridge.c:1378:fuse_attr_cbk] 0-glusterfs-fuse: 29: STAT() /testdir => 11196337593315914732
[2023-03-04 12:20:57.039100 +0000] T [fuse-bridge.c:374:send_fuse_iov] 0-glusterfs-fuse: writev() result 120/120
[2023-03-04 12:20:57.039284 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-1: (sock:12) socket_event_poll_in returned 0
[2023-03-04 12:20:57.039489 +0000] T [fuse-bridge.c:4574:fuse_listxattr_resume] 0-glusterfs-fuse: 18: LISTXATTR /testdir/106721347494936
[2023-03-04 12:20:57.039575 +0000] T [MSGID: 0] [fuse-bridge.c:4578:fuse_listxattr_resume] 0-stack-trace: stack-address: 0x61c000034098, winding from fuse to meta-autoload
[2023-03-04 12:20:57.039656 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000034098, winding from meta-autoload to test-volume
[2023-03-04 12:20:57.039727 +0000] T [MSGID: 0] [io-stats.c:3275:io_stats_getxattr] 0-stack-trace: stack-address: 0x61c000034098, winding from test-volume to test-volume-io-threads
[2023-03-04 12:20:57.039837 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: GETXATTR scheduled as normal priority fop
[2023-03-04 12:20:57.039961 +0000] T [MSGID: 0] [defaults.c:2008:default_getxattr_resume] 0-stack-trace: stack-address: 0x61c000034098, winding from test-volume-io-threads to test-volume-md-cache
[2023-03-04 12:20:57.040069 +0000] T [MSGID: 0] [md-cache.c:2546:mdc_getxattr] 0-stack-trace: stack-address: 0x61c000034098, winding from test-volume-md-cache to test-volume-quick-read
[2023-03-04 12:20:57.040161 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000034098, winding from test-volume-quick-read to test-volume-open-behind
[2023-03-04 12:20:57.040235 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000034098, winding from test-volume-open-behind to test-volume-write-behind
[2023-03-04 12:20:57.040342 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000034098, winding from test-volume-write-behind to test-volume-utime
[2023-03-04 12:20:57.040489 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000034098, winding from test-volume-utime to test-volume-dht
[2023-03-04 12:20:57.040626 +0000] T [MSGID: 0] [dht-common.c:5163:dht_getxattr] 0-stack-trace: stack-address: 0x61c000034098, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:57.040830 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 36, rpc hdr: 84
[2023-03-04 12:20:57.041003 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 30, XID: 0x29, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:57.041093 +0000] T [MSGID: 0] [dht-common.c:5163:dht_getxattr] 0-stack-trace: stack-address: 0x61c000034098, winding from test-volume-dht to test-volume-client-1
[2023-03-04 12:20:57.041250 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 36, rpc hdr: 84
[2023-03-04 12:20:57.041388 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 30, XID: 0x1d, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) to rpc-transport (test-volume-client-1)
[2023-03-04 12:20:57.043845 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:57.044056 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:57.044089 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-1: client (sock:12) in:1, out:0, err:0
[2023-03-04 12:20:57.044110 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:57.044137 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-1: Client socket (12) is already connected
[2023-03-04 12:20:57.044202 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:57.044242 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:57.044362 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:57.044368 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x29 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:57.044477 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-1: received rpc message (RPC XID: 0x1d Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) from rpc-transport (test-volume-client-1)
[2023-03-04 12:20:57.044532 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:933:client4_0_getxattr_cbk] 0-test-volume-client-0: resetting op_ret to 0 from 16
[2023-03-04 12:20:57.044560 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:933:client4_0_getxattr_cbk] 0-test-volume-client-1: resetting op_ret to 0 from 3
[2023-03-04 12:20:57.044689 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:938:client4_0_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000034098, test-volume-client-0 returned 0
[2023-03-04 12:20:57.044689 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:938:client4_0_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000034098, test-volume-client-1 returned 0
[2023-03-04 12:20:57.044964 +0000] D [dict.c:650:dict_ref] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0xdc8be) [0x7fffeee628be] -->/usr/local/lib/libglusterfs.so.0(dict_ref+0x95) [0x7ffff72ce375] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL [Invalid argument]
[2023-03-04 12:20:57.045080 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:57.045153 +0000] D [dict.c:650:dict_ref] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0xdc8be) [0x7fffeee628be] -->/usr/local/lib/libglusterfs.so.0(dict_ref+0x95) [0x7ffff72ce375] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL [Invalid argument]
[2023-03-04 12:20:57.045219 +0000] T [MSGID: 0] [dht-common.c:4632:dht_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000034098, test-volume-dht returned 0
[2023-03-04 12:20:57.045331 +0000] T [MSGID: 0] [md-cache.c:2495:mdc_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000034098, test-volume-md-cache returned 0
[2023-03-04 12:20:57.045439 +0000] T [MSGID: 0] [defaults.c:1371:default_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000034098, test-volume-io-threads returned 0
[2023-03-04 12:20:57.045527 +0000] T [MSGID: 0] [io-stats.c:2444:io_stats_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000034098, test-volume returned 0
[2023-03-04 12:20:57.045668 +0000] T [fuse-bridge.c:4320:fuse_xattr_cbk] 0-glusterfs-fuse: 30: GETXATTR() /testdir => 0
[2023-03-04 12:20:57.045747 +0000] T [fuse-bridge.c:374:send_fuse_iov] 0-glusterfs-fuse: writev() result 24/24
[2023-03-04 12:20:57.045901 +0000] T [fuse-bridge.c:4574:fuse_listxattr_resume] 0-glusterfs-fuse: 20: LISTXATTR /testdir/106721347494936
[2023-03-04 12:20:57.045986 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-1: (sock:12) socket_event_poll_in returned 0
[2023-03-04 12:20:57.045977 +0000] T [MSGID: 0] [fuse-bridge.c:4578:fuse_listxattr_resume] 0-stack-trace: stack-address: 0x61c000034898, winding from fuse to meta-autoload
[2023-03-04 12:20:57.046069 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000034898, winding from meta-autoload to test-volume
[2023-03-04 12:20:57.046149 +0000] T [MSGID: 0] [io-stats.c:3275:io_stats_getxattr] 0-stack-trace: stack-address: 0x61c000034898, winding from test-volume to test-volume-io-threads
[2023-03-04 12:20:57.046300 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: GETXATTR scheduled as normal priority fop
[2023-03-04 12:20:57.046457 +0000] T [MSGID: 0] [defaults.c:2008:default_getxattr_resume] 0-stack-trace: stack-address: 0x61c000034898, winding from test-volume-io-threads to test-volume-md-cache
[2023-03-04 12:20:57.046542 +0000] T [MSGID: 0] [md-cache.c:2546:mdc_getxattr] 0-stack-trace: stack-address: 0x61c000034898, winding from test-volume-md-cache to test-volume-quick-read
[2023-03-04 12:20:57.046603 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000034898, winding from test-volume-quick-read to test-volume-open-behind
[2023-03-04 12:20:57.046646 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000034898, winding from test-volume-open-behind to test-volume-write-behind
[2023-03-04 12:20:57.046689 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000034898, winding from test-volume-write-behind to test-volume-utime
[2023-03-04 12:20:57.046739 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000034898, winding from test-volume-utime to test-volume-dht
[2023-03-04 12:20:57.046805 +0000] T [MSGID: 0] [dht-common.c:5163:dht_getxattr] 0-stack-trace: stack-address: 0x61c000034898, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:57.046910 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 36, rpc hdr: 84
[2023-03-04 12:20:57.047076 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 31, XID: 0x2a, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:57.047152 +0000] T [MSGID: 0] [dht-common.c:5163:dht_getxattr] 0-stack-trace: stack-address: 0x61c000034898, winding from test-volume-dht to test-volume-client-1
[2023-03-04 12:20:57.047285 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 36, rpc hdr: 84
[2023-03-04 12:20:57.047446 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 31, XID: 0x1e, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) to rpc-transport (test-volume-client-1)
[2023-03-04 12:20:57.048339 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:57.048421 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:57.048459 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:57.048493 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-1: client (sock:12) in:1, out:0, err:0
[2023-03-04 12:20:57.048513 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:57.048552 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-1: Client socket (12) is already connected
[2023-03-04 12:20:57.048648 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x2a Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:57.048677 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:57.048772 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:57.048781 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:933:client4_0_getxattr_cbk] 0-test-volume-client-0: resetting op_ret to 0 from 16
[2023-03-04 12:20:57.048871 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-1: received rpc message (RPC XID: 0x1e Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) from rpc-transport (test-volume-client-1)
[2023-03-04 12:20:57.048859 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:938:client4_0_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000034898, test-volume-client-0 returned 0
[2023-03-04 12:20:57.049081 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:933:client4_0_getxattr_cbk] 0-test-volume-client-1: resetting op_ret to 0 from 3
[2023-03-04 12:20:57.049119 +0000] D [dict.c:650:dict_ref] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0xdc8be) [0x7fffeee628be] -->/usr/local/lib/libglusterfs.so.0(dict_ref+0x95) [0x7ffff72ce375] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL [Invalid argument]
[2023-03-04 12:20:57.049233 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:938:client4_0_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000034898, test-volume-client-1 returned 0
[2023-03-04 12:20:57.049276 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:57.049452 +0000] D [dict.c:650:dict_ref] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0xdc8be) [0x7fffeee628be] -->/usr/local/lib/libglusterfs.so.0(dict_ref+0x95) [0x7ffff72ce375] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL [Invalid argument]
[2023-03-04 12:20:57.049524 +0000] T [MSGID: 0] [dht-common.c:4632:dht_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000034898, test-volume-dht returned 0
[2023-03-04 12:20:57.049607 +0000] T [MSGID: 0] [md-cache.c:2495:mdc_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000034898, test-volume-md-cache returned 0
[2023-03-04 12:20:57.049687 +0000] T [MSGID: 0] [defaults.c:1371:default_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000034898, test-volume-io-threads returned 0
[2023-03-04 12:20:57.049779 +0000] T [MSGID: 0] [io-stats.c:2444:io_stats_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000034898, test-volume returned 0
[2023-03-04 12:20:57.049871 +0000] T [fuse-bridge.c:4320:fuse_xattr_cbk] 0-glusterfs-fuse: 31: GETXATTR() /testdir => 0
[2023-03-04 12:20:57.049961 +0000] T [fuse-bridge.c:374:send_fuse_iov] 0-glusterfs-fuse: writev() result 26/26
[2023-03-04 12:20:57.050145 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-1: (sock:12) socket_event_poll_in returned 0
[2023-03-04 12:20:57.050429 +0000] T [fuse-bridge.c:4465:fuse_getxattr_resume] 0-glusterfs-fuse: 22: GETXATTR /testdir/106721347494936 (user.attr)
[2023-03-04 12:20:57.050561 +0000] T [MSGID: 0] [fuse-bridge.c:4469:fuse_getxattr_resume] 0-stack-trace: stack-address: 0x61c000035098, winding from fuse to meta-autoload
[2023-03-04 12:20:57.050701 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000035098, winding from meta-autoload to test-volume
[2023-03-04 12:20:57.050787 +0000] T [MSGID: 0] [io-stats.c:3275:io_stats_getxattr] 0-stack-trace: stack-address: 0x61c000035098, winding from test-volume to test-volume-io-threads
[2023-03-04 12:20:57.050889 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: GETXATTR scheduled as normal priority fop
[2023-03-04 12:20:57.051090 +0000] T [MSGID: 0] [defaults.c:2008:default_getxattr_resume] 0-stack-trace: stack-address: 0x61c000035098, winding from test-volume-io-threads to test-volume-md-cache
[2023-03-04 12:20:57.051278 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key user.attr doesn't satisfy caching requirements
[2023-03-04 12:20:57.051424 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:20:57.051408 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key user.attr doesn't satisfy caching requirements
[2023-03-04 12:20:57.051418 +0000] T [MSGID: 0] [md-cache.c:2546:mdc_getxattr] 0-stack-trace: stack-address: 0x61c000035098, winding from test-volume-md-cache to test-volume-quick-read
[2023-03-04 12:20:57.051537 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000035098, winding from test-volume-quick-read to test-volume-open-behind
[2023-03-04 12:20:57.051585 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000035098, winding from test-volume-open-behind to test-volume-write-behind
[2023-03-04 12:20:57.051630 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000035098, winding from test-volume-write-behind to test-volume-utime
[2023-03-04 12:20:57.051674 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000035098, winding from test-volume-utime to test-volume-dht
[2023-03-04 12:20:57.051830 +0000] T [MSGID: 0] [dht-common.c:5163:dht_getxattr] 0-stack-trace: stack-address: 0x61c000035098, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:57.051931 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 48, rpc hdr: 84
[2023-03-04 12:20:57.052098 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 32, XID: 0x2b, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:57.052184 +0000] T [MSGID: 0] [dht-common.c:5163:dht_getxattr] 0-stack-trace: stack-address: 0x61c000035098, winding from test-volume-dht to test-volume-client-1
[2023-03-04 12:20:57.052327 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 48, rpc hdr: 84
[2023-03-04 12:20:57.052460 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 32, XID: 0x1f, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) to rpc-transport (test-volume-client-1)
[2023-03-04 12:20:57.053352 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:57.053462 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-1: client (sock:12) in:1, out:0, err:0
[2023-03-04 12:20:57.053474 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:57.053519 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-1: Client socket (12) is already connected
[2023-03-04 12:20:57.053565 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:57.053611 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:57.053664 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:57.053690 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:57.053755 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x2b Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:57.053773 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-1: received rpc message (RPC XID: 0x1f Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) from rpc-transport (test-volume-client-1)
[2023-03-04 12:20:57.053880 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:933:client4_0_getxattr_cbk] 0-test-volume-client-0: resetting op_ret to 0 from 3
[2023-03-04 12:20:57.053900 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:933:client4_0_getxattr_cbk] 0-test-volume-client-1: resetting op_ret to 0 from 3
[2023-03-04 12:20:57.054015 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:938:client4_0_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000035098, test-volume-client-1 returned 0
[2023-03-04 12:20:57.054015 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:938:client4_0_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000035098, test-volume-client-0 returned 0
[2023-03-04 12:20:57.054225 +0000] D [dict.c:650:dict_ref] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0xdc8be) [0x7fffeee628be] -->/usr/local/lib/libglusterfs.so.0(dict_ref+0x95) [0x7ffff72ce375] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL [Invalid argument]
[2023-03-04 12:20:57.054356 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-1: (sock:12) socket_event_poll_in returned 0
[2023-03-04 12:20:57.054417 +0000] D [dict.c:650:dict_ref] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0xdc8be) [0x7fffeee628be] -->/usr/local/lib/libglusterfs.so.0(dict_ref+0x95) [0x7ffff72ce375] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL [Invalid argument]
[2023-03-04 12:20:57.054478 +0000] T [MSGID: 0] [dht-common.c:4632:dht_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000035098, test-volume-dht returned 0
[2023-03-04 12:20:57.054547 +0000] T [MSGID: 0] [md-cache.c:2495:mdc_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000035098, test-volume-md-cache returned 0
[2023-03-04 12:20:57.054627 +0000] T [MSGID: 0] [defaults.c:1371:default_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000035098, test-volume-io-threads returned 0
[2023-03-04 12:20:57.054693 +0000] T [MSGID: 0] [io-stats.c:2444:io_stats_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000035098, test-volume returned 0
[2023-03-04 12:20:57.054757 +0000] T [fuse-bridge.c:4320:fuse_xattr_cbk] 0-glusterfs-fuse: 32: GETXATTR() /testdir => 0
[2023-03-04 12:20:57.054815 +0000] T [fuse-bridge.c:374:send_fuse_iov] 0-glusterfs-fuse: writev() result 24/24
[2023-03-04 12:20:57.054981 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:57.055140 +0000] T [fuse-bridge.c:4465:fuse_getxattr_resume] 0-glusterfs-fuse: 24: GETXATTR /testdir/106721347494936 (user.attr)
[2023-03-04 12:20:57.055197 +0000] T [MSGID: 0] [fuse-bridge.c:4469:fuse_getxattr_resume] 0-stack-trace: stack-address: 0x61c000035898, winding from fuse to meta-autoload
[2023-03-04 12:20:57.055266 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000035898, winding from meta-autoload to test-volume
[2023-03-04 12:20:57.055375 +0000] T [MSGID: 0] [io-stats.c:3275:io_stats_getxattr] 0-stack-trace: stack-address: 0x61c000035898, winding from test-volume to test-volume-io-threads
[2023-03-04 12:20:57.055467 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: GETXATTR scheduled as normal priority fop
[2023-03-04 12:20:57.055582 +0000] T [MSGID: 0] [defaults.c:2008:default_getxattr_resume] 0-stack-trace: stack-address: 0x61c000035898, winding from test-volume-io-threads to test-volume-md-cache
[2023-03-04 12:20:57.055674 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key user.attr doesn't satisfy caching requirements
[2023-03-04 12:20:57.055759 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:20:57.055741 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key user.attr doesn't satisfy caching requirements
[2023-03-04 12:20:57.055753 +0000] T [MSGID: 0] [md-cache.c:2546:mdc_getxattr] 0-stack-trace: stack-address: 0x61c000035898, winding from test-volume-md-cache to test-volume-quick-read
[2023-03-04 12:20:57.055948 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000035898, winding from test-volume-quick-read to test-volume-open-behind
[2023-03-04 12:20:57.056021 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000035898, winding from test-volume-open-behind to test-volume-write-behind
[2023-03-04 12:20:57.056091 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000035898, winding from test-volume-write-behind to test-volume-utime
[2023-03-04 12:20:57.056135 +0000] T [MSGID: 0] [defaults.c:2769:default_getxattr] 0-stack-trace: stack-address: 0x61c000035898, winding from test-volume-utime to test-volume-dht
[2023-03-04 12:20:57.056205 +0000] T [MSGID: 0] [dht-common.c:5163:dht_getxattr] 0-stack-trace: stack-address: 0x61c000035898, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:20:57.056289 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 48, rpc hdr: 84
[2023-03-04 12:20:57.056419 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 33, XID: 0x2c, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) to rpc-transport (test-volume-client-0)
[2023-03-04 12:20:57.056497 +0000] T [MSGID: 0] [dht-common.c:5163:dht_getxattr] 0-stack-trace: stack-address: 0x61c000035898, winding from test-volume-dht to test-volume-client-1
[2023-03-04 12:20:57.056583 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 48, rpc hdr: 84
[2023-03-04 12:20:57.056704 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 33, XID: 0x20, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) to rpc-transport (test-volume-client-1)
[2023-03-04 12:20:57.058358 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:20:57.058363 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-1: client (sock:12) in:1, out:0, err:0
[2023-03-04 12:20:57.058439 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:20:57.058491 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-1: Client socket (12) is already connected
[2023-03-04 12:20:57.058536 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:57.058575 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:57.058629 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:20:57.058654 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:57.058723 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x2c Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) from rpc-transport (test-volume-client-0)
[2023-03-04 12:20:57.058741 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-1: received rpc message (RPC XID: 0x20 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) from rpc-transport (test-volume-client-1)
[2023-03-04 12:20:57.058795 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:933:client4_0_getxattr_cbk] 0-test-volume-client-0: resetting op_ret to 0 from 3
[2023-03-04 12:20:57.058820 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:933:client4_0_getxattr_cbk] 0-test-volume-client-1: resetting op_ret to 0 from 3
[2023-03-04 12:20:57.058869 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:938:client4_0_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000035898, test-volume-client-0 returned 0
[2023-03-04 12:20:57.058920 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:938:client4_0_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000035898, test-volume-client-1 returned 0
[2023-03-04 12:20:57.059048 +0000] D [dict.c:650:dict_ref] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0xdc8be) [0x7fffeee628be] -->/usr/local/lib/libglusterfs.so.0(dict_ref+0x95) [0x7ffff72ce375] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL [Invalid argument]
[2023-03-04 12:20:57.059171 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:20:57.059240 +0000] D [dict.c:650:dict_ref] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0xdc8be) [0x7fffeee628be] -->/usr/local/lib/libglusterfs.so.0(dict_ref+0x95) [0x7ffff72ce375] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL [Invalid argument]
[2023-03-04 12:20:57.059338 +0000] T [MSGID: 0] [dht-common.c:4632:dht_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000035898, test-volume-dht returned 0
[2023-03-04 12:20:57.059440 +0000] T [MSGID: 0] [md-cache.c:2495:mdc_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000035898, test-volume-md-cache returned 0
[2023-03-04 12:20:57.059538 +0000] T [MSGID: 0] [defaults.c:1371:default_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000035898, test-volume-io-threads returned 0
[2023-03-04 12:20:57.059623 +0000] T [MSGID: 0] [io-stats.c:2444:io_stats_getxattr_cbk] 0-stack-trace: stack-address: 0x61c000035898, test-volume returned 0
[2023-03-04 12:20:57.059706 +0000] T [fuse-bridge.c:4320:fuse_xattr_cbk] 0-glusterfs-fuse: 33: GETXATTR() /testdir => 0
[2023-03-04 12:20:57.059788 +0000] T [fuse-bridge.c:374:send_fuse_iov] 0-glusterfs-fuse: writev() result 19/19
[2023-03-04 12:20:57.059995 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-1: (sock:12) socket_event_poll_in returned 0
[2023-03-04 12:20:57.514142 +0000] D [rpc-clnt-ping.c:308:rpc_clnt_start_ping] 0-test-volume-client-0: returning because transport is connected but there are no frames
[2023-03-04 12:20:57.514535 +0000] D [rpc-clnt-ping.c:308:rpc_clnt_start_ping] 0-test-volume-client-1: returning because transport is connected but there are no frames
[2023-03-04 12:20:58.820135 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-1: client (sock:12) in:1, out:0, err:0
[2023-03-04 12:20:58.820239 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-1: Client socket (12) is already connected
[2023-03-04 12:20:58.820283 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-1: ***** reading over non-SSL
[2023-03-04 12:20:58.820369 +0000] D [socket.c:723:__socket_rwv] 0-test-volume-client-1: EOF on socket 12 (errno:0:Success); returning ENODATA
[2023-03-04 12:20:58.820429 +0000] W [socket.c:751:__socket_rwv] 0-test-volume-client-1: readv on 192.168.0.31:51625 failed (No data available)
[2023-03-04 12:20:58.820463 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-1: (sock:12) socket_event_poll_in returned -1
[2023-03-04 12:20:58.820502 +0000] T [socket.c:204:socket_dump_info] 0-test-volume-client-1: $$$ client: disconnecting from (af:2,sock:12) 192.168.0.31 non-SSL (errno:0:Success)
[2023-03-04 12:20:58.820534 +0000] D [socket.c:2817:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:12) (non-SSL)
[2023-03-04 12:20:58.820584 +0000] D [MSGID: 0] [client.c:2235:client_rpc_notify] 0-test-volume-client-1: got RPC_CLNT_DISCONNECT
[2023-03-04 12:20:58.820668 +0000] I [MSGID: 114018] [client.c:2242:client_rpc_notify] 0-test-volume-client-1: disconnected from client, process will keep trying to connect glusterd until brick's port is available [{conn-name=test-volume-client-1}]
[2023-03-04 12:20:58.820785 +0000] T [MSGID: 0] [client.c:2309:client_rpc_notify] 0-test-volume-client-1: volfile doesn't have remote-port, resetting to 0
[2023-03-04 12:20:58.821398 +0000] D [rpc-clnt-ping.c:90:rpc_clnt_remove_ping_timer_locked] (--> /usr/local/lib/libglusterfs.so.0(gf_backtrace_save+0x29)[0x7ffff730b069] (--> /usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x41a)[0x7ffff72f280a] (--> /usr/local/lib/libgfrpc.so.0(+0x382ec)[0x7ffff722b2ec] (--> /usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x1e8)[0x7ffff721dd38] (--> /usr/local/lib/libgfrpc.so.0(+0x2daad)[0x7ffff7220aad] ))))) 0-: 192.168.0.31:51625: ping timer event already removed
[2023-03-04 12:21:01.821621 +0000] T [rpc-clnt.c:374:rpc_clnt_reconnect] 0-test-volume-client-1: attempting reconnect
[2023-03-04 12:21:01.821742 +0000] T [socket.c:3228:socket_connect] 0-test-volume-client-1: connecting 0x61a000013898, sock=-1
[2023-03-04 12:21:01.821782 +0000] T [name.c:346:af_inet_client_get_remote_sockaddr] 0-test-volume-client-1: option remote-port missing in volume test-volume-client-1. Defaulting to 24007
[2023-03-04 12:21:01.821808 +0000] T [MSGID: 0] [name.c:234:gf_resolve_ip6] 0-resolver: flushing DNS cache
[2023-03-04 12:21:01.821887 +0000] T [MSGID: 0] [name.c:239:gf_resolve_ip6] 0-resolver: DNS cache not present, freshly probing hostname: 192.168.0.31
[2023-03-04 12:21:01.822117 +0000] D [MSGID: 0] [name.c:277:gf_resolve_ip6] 0-resolver: returning ip-192.168.0.31 (port-24007) for hostname: 192.168.0.31 and port: 24007
[2023-03-04 12:21:01.822182 +0000] D [socket.c:3145:socket_fix_ssl_opts] 0-test-volume-client-1: disabling SSL for portmapper connection
[2023-03-04 12:21:01.822268 +0000] T [socket.c:1023:__socket_nodelay] 0-test-volume-client-1: NODELAY enabled for socket 6
[2023-03-04 12:21:01.822347 +0000] T [socket.c:1109:__socket_keepalive] 0-test-volume-client-1: Keep-alive enabled for socket: 6, (idle: 20, interval: 2, max-probes: 9, timeout: 0)
[2023-03-04 12:21:01.822509 +0000] T [socket.c:3345:socket_connect] 0-test-volume-client-1: >>> connect() with non-blocking IO for ALL
[2023-03-04 12:21:01.822555 +0000] T [socket.c:204:socket_dump_info] 0-test-volume-client-1: $$$ client: connecting to (af:2,sock:6) 192.168.0.31 non-SSL (errno:0:Success)
[2023-03-04 12:21:01.822966 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-1: client (sock:6) in:1, out:4, err:24
[2023-03-04 12:21:01.823020 +0000] T [socket.c:204:socket_dump_info] 0-test-volume-client-1: $$$ client: disconnecting from (af:2,sock:6) 192.168.0.31 non-SSL (errno:111:Connection refused)
[2023-03-04 12:21:01.823048 +0000] D [socket.c:2817:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:6) (non-SSL)
[2023-03-04 12:21:01.823085 +0000] D [MSGID: 0] [client.c:2235:client_rpc_notify] 0-test-volume-client-1: got RPC_CLNT_DISCONNECT
[2023-03-04 12:21:01.823134 +0000] T [MSGID: 0] [client.c:2309:client_rpc_notify] 0-test-volume-client-1: volfile doesn't have remote-port, resetting to 0
[2023-03-04 12:21:01.823731 +0000] D [rpc-clnt-ping.c:90:rpc_clnt_remove_ping_timer_locked] (--> /usr/local/lib/libglusterfs.so.0(gf_backtrace_save+0x29)[0x7ffff730b069] (--> /usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x41a)[0x7ffff72f280a] (--> /usr/local/lib/libgfrpc.so.0(+0x382ec)[0x7ffff722b2ec] (--> /usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x1e8)[0x7ffff721dd38] (--> /usr/local/lib/libgfrpc.so.0(+0x2daad)[0x7ffff7220aad] ))))) 0-: 192.168.0.31:51625: ping timer event already removed
[2023-03-04 12:21:04.823922 +0000] T [rpc-clnt.c:374:rpc_clnt_reconnect] 0-test-volume-client-1: attempting reconnect
[2023-03-04 12:21:04.824045 +0000] T [socket.c:3228:socket_connect] 0-test-volume-client-1: connecting 0x61a000013898, sock=-1
[2023-03-04 12:21:04.824102 +0000] T [name.c:346:af_inet_client_get_remote_sockaddr] 0-test-volume-client-1: option remote-port missing in volume test-volume-client-1. Defaulting to 24007
[2023-03-04 12:21:04.824139 +0000] T [MSGID: 0] [name.c:234:gf_resolve_ip6] 0-resolver: flushing DNS cache
[2023-03-04 12:21:04.824204 +0000] T [MSGID: 0] [name.c:239:gf_resolve_ip6] 0-resolver: DNS cache not present, freshly probing hostname: 192.168.0.31
[2023-03-04 12:21:04.824523 +0000] D [MSGID: 0] [name.c:277:gf_resolve_ip6] 0-resolver: returning ip-192.168.0.31 (port-24007) for hostname: 192.168.0.31 and port: 24007
[2023-03-04 12:21:04.824607 +0000] D [socket.c:3145:socket_fix_ssl_opts] 0-test-volume-client-1: disabling SSL for portmapper connection
[2023-03-04 12:21:04.824742 +0000] T [socket.c:1023:__socket_nodelay] 0-test-volume-client-1: NODELAY enabled for socket 6
[2023-03-04 12:21:04.824784 +0000] T [socket.c:1109:__socket_keepalive] 0-test-volume-client-1: Keep-alive enabled for socket: 6, (idle: 20, interval: 2, max-probes: 9, timeout: 0)
[2023-03-04 12:21:04.824944 +0000] T [socket.c:3345:socket_connect] 0-test-volume-client-1: >>> connect() with non-blocking IO for ALL
[2023-03-04 12:21:04.824992 +0000] T [socket.c:204:socket_dump_info] 0-test-volume-client-1: $$$ client: connecting to (af:2,sock:6) 192.168.0.31 non-SSL (errno:0:Success)
[2023-03-04 12:21:04.825467 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-1: client (sock:6) in:1, out:4, err:24
[2023-03-04 12:21:04.825539 +0000] T [socket.c:204:socket_dump_info] 0-test-volume-client-1: $$$ client: disconnecting from (af:2,sock:6) 192.168.0.31 non-SSL (errno:111:Connection refused)
[2023-03-04 12:21:04.825574 +0000] D [socket.c:2817:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:6) (non-SSL)
[2023-03-04 12:21:04.825625 +0000] D [MSGID: 0] [client.c:2235:client_rpc_notify] 0-test-volume-client-1: got RPC_CLNT_DISCONNECT
[2023-03-04 12:21:04.825704 +0000] T [MSGID: 0] [client.c:2309:client_rpc_notify] 0-test-volume-client-1: volfile doesn't have remote-port, resetting to 0
[2023-03-04 12:21:04.826336 +0000] D [rpc-clnt-ping.c:90:rpc_clnt_remove_ping_timer_locked] (--> /usr/local/lib/libglusterfs.so.0(gf_backtrace_save+0x29)[0x7ffff730b069] (--> /usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x41a)[0x7ffff72f280a] (--> /usr/local/lib/libgfrpc.so.0(+0x382ec)[0x7ffff722b2ec] (--> /usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x1e8)[0x7ffff721dd38] (--> /usr/local/lib/libgfrpc.so.0(+0x2daad)[0x7ffff7220aad] ))))) 0-: 192.168.0.31:51625: ping timer event already removed
[2023-03-04 12:21:04.916426 +0000] T [MSGID: 0] [fuse-bridge.c:1529:fuse_getattr] 0-stack-trace: stack-address: 0x61c000036098, winding from fuse to meta-autoload
[2023-03-04 12:21:04.916581 +0000] T [MSGID: 0] [defaults.c:3024:default_lookup] 0-stack-trace: stack-address: 0x61c000036098, winding from meta-autoload to test-volume
[2023-03-04 12:21:04.916683 +0000] T [MSGID: 0] [io-stats.c:2768:io_stats_lookup] 0-stack-trace: stack-address: 0x61c000036098, winding from test-volume to test-volume-io-threads
[2023-03-04 12:21:04.916783 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: LOOKUP scheduled as fast priority fop
[2023-03-04 12:21:04.916890 +0000] T [MSGID: 0] [defaults.c:2215:default_lookup_resume] 0-stack-trace: stack-address: 0x61c000036098, winding from test-volume-io-threads to test-volume-md-cache
[2023-03-04 12:21:04.917006 +0000] T [MSGID: 0] [md-cache.c:628:mdc_inode_iatt_get] 0-md-cache: iatt cache not valid for (00000000-0000-0000-0000-000000000001)
[2023-03-04 12:21:04.917072 +0000] T [MSGID: 0] [md-cache.c:1337:mdc_lookup] 0-stack-trace: stack-address: 0x61c000036098, winding from test-volume-md-cache to test-volume-quick-read
[2023-03-04 12:21:04.917141 +0000] T [MSGID: 0] [quick-read.c:647:qr_lookup] 0-stack-trace: stack-address: 0x61c000036098, winding from test-volume-quick-read to test-volume-open-behind
[2023-03-04 12:21:04.917230 +0000] T [MSGID: 0] [defaults.c:3024:default_lookup] 0-stack-trace: stack-address: 0x61c000036098, winding from test-volume-open-behind to test-volume-write-behind
[2023-03-04 12:21:04.917353 +0000] T [MSGID: 0] [write-behind.c:2413:wb_lookup] 0-stack-trace: stack-address: 0x61c000036098, winding from test-volume-write-behind to test-volume-utime
[2023-03-04 12:21:04.917423 +0000] T [MSGID: 0] [utime.c:260:gf_utime_lookup] 0-stack-trace: stack-address: 0x61c000036098, winding from test-volume-utime to test-volume-dht
[2023-03-04 12:21:04.917568 +0000] T [MSGID: 0] [dht-common.c:3453:dht_do_revalidate] 0-stack-trace: stack-address: 0x61c000036098, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:21:04.917725 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 540, rpc hdr: 84
[2023-03-04 12:21:04.917871 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 34, XID: 0x2d, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (test-volume-client-0)
[2023-03-04 12:21:04.918398 +0000] D [rpc-clnt-ping.c:90:rpc_clnt_remove_ping_timer_locked] (--> /usr/local/lib/libglusterfs.so.0(gf_backtrace_save+0x29)[0x7ffff730b069] (--> /usr/local/lib/libglusterfs.so.0(_gf_log_callingfn+0x41a)[0x7ffff72f280a] (--> /usr/local/lib/libgfrpc.so.0(+0x382ec)[0x7ffff722b2ec] (--> /usr/local/lib/libgfrpc.so.0(+0x3966d)[0x7ffff722c66d] (--> /usr/local/lib/libgfrpc.so.0(+0x39b0d)[0x7ffff722cb0d] ))))) 0-: 192.168.0.30:57763: ping timer event already removed
[2023-03-04 12:21:04.918520 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 0, rpc hdr: 80
[2023-03-04 12:21:04.918633 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 35, XID: 0x2e, Program: GF-DUMP, ProgVers: 1, Proc: 2) to rpc-transport (test-volume-client-0)
[2023-03-04 12:21:04.918696 +0000] T [MSGID: 0] [dht-common.c:3453:dht_do_revalidate] 0-stack-trace: stack-address: 0x61c000036098, winding from test-volume-dht to test-volume-client-1
[2023-03-04 12:21:04.918857 +0000] D [MSGID: 0] [client.c:203:client_submit_request] 0-test-volume-client-1: connection in disconnected state
[2023-03-04 12:21:04.919021 +0000] W [MSGID: 114031] [client-rpc-fops_v2.c:2561:client4_0_lookup_cbk] 0-test-volume-client-1: remote operation failed. [{path=/}, {gfid=00000000-0000-0000-0000-000000000001}, {errno=107}, {error=Transport endpoint is not connected}]
[2023-03-04 12:21:04.919113 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:21:04.919113 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000036098, test-volume-client-1 returned -1 [Transport endpoint is not connected]
[2023-03-04 12:21:04.919170 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:21:04.919234 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:21:04.919290 +0000] D [dict.c:1393:dict_get_with_ref] (-->/usr/local/lib/libglusterfs.so.0(dict_get_uint32+0xe5) [0x7ffff72d6355] -->/usr/local/lib/libglusterfs.so.0(dict_get_with_ref+0xb9) [0x7ffff72d4569] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL: trusted.glusterfs.dht.commithash [Invalid argument]
[2023-03-04 12:21:04.919319 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:21:04.919445 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /: revalidate lookup on test-volume-client-1 returned op_ret -1 [Transport endpoint is not connected]
[2023-03-04 12:21:04.919496 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x2e Program: GF-DUMP, ProgVers: 1, Proc: 2) from rpc-transport (test-volume-client-0)
[2023-03-04 12:21:04.919509 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:21:04.919545 +0000] D [rpc-clnt-ping.c:188:rpc_clnt_ping_cbk] 0-test-volume-client-0: Ping latency is 1ms
[2023-03-04 12:21:04.919584 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:21:04.919583 +0000] W [MSGID: 114029] [client-rpc-fops_v2.c:2991:client4_0_lookup] 0-test-volume-client-1: failed to send the fop []
[2023-03-04 12:21:04.919680 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:21:04.919666 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:21:04.919757 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:21:04.919843 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x2d Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) from rpc-transport (test-volume-client-0)
[2023-03-04 12:21:04.919934 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000036098, test-volume-client-0 returned 0
[2023-03-04 12:21:04.919985 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /: revalidate lookup on test-volume-client-0 returned op_ret 0
[2023-03-04 12:21:04.920030 +0000] D [MSGID: 0] [dht-common.c:1796:dht_revalidate_cbk] 0-test-volume-dht: /: internal xattr trusted.glusterfs.dht.mds is not present on subvol test-volume-client-0(gfid is 00000000-0000-0000-0000-000000000001)
[2023-03-04 12:21:04.920087 +0000] T [MSGID: 0] [dht-common.c:1975:dht_revalidate_cbk] 0-stack-trace: stack-address: 0x61c000036098, test-volume-dht returned 0
[2023-03-04 12:21:04.920129 +0000] T [MSGID: 0] [utime.c:218:gf_utime_set_mdata_lookup_cbk] 0-stack-trace: stack-address: 0x61c000036098, test-volume-utime returned 0
[2023-03-04 12:21:04.920182 +0000] T [MSGID: 0] [write-behind.c:2371:wb_lookup_cbk] 0-stack-trace: stack-address: 0x61c000036098, test-volume-write-behind returned 0
[2023-03-04 12:21:04.920234 +0000] T [MSGID: 0] [quick-read.c:606:qr_lookup_cbk] 0-stack-trace: stack-address: 0x61c000036098, test-volume-quick-read returned 0
[2023-03-04 12:21:04.920384 +0000] T [md-cache.c:583:mdc_inode_iatt_set_validate] (-->/usr/local/lib/glusterfs/11dev/xlator/performance/md-cache.so(+0x26746) [0x7fffeec9b746] -->/usr/local/lib/glusterfs/11dev/xlator/performance/md-cache.so(+0x26105) [0x7fffeec9b105] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-md-cache: Updated iatt(00000000-0000-0000-0000-000000000001) time:1677932464 generation=0
[2023-03-04 12:21:04.920444 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.glusterfs.mdata doesn't satisfy caching requirements
[2023-03-04 12:21:04.920500 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:21:04.920488 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.glusterfs.mdata doesn't satisfy caching requirements
[2023-03-04 12:21:04.920496 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key glusterfs.open-fd-count doesn't satisfy caching requirements
[2023-03-04 12:21:04.920629 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:21:04.920586 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key glusterfs.open-fd-count doesn't satisfy caching requirements
[2023-03-04 12:21:04.920593 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.glusterfs.dht doesn't satisfy caching requirements
[2023-03-04 12:21:04.920794 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:21:04.920783 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.glusterfs.dht doesn't satisfy caching requirements
[2023-03-04 12:21:04.920790 +0000] T [MSGID: 0] [md-cache.c:806:mdc_inode_xatt_set] 0-md-cache: xatt cache set for (00000000-0000-0000-0000-000000000001) time:1677932464
[2023-03-04 12:21:04.920879 +0000] T [MSGID: 0] [md-cache.c:1269:mdc_lookup_cbk] 0-stack-trace: stack-address: 0x61c000036098, test-volume-md-cache returned 0
[2023-03-04 12:21:04.920928 +0000] T [MSGID: 0] [defaults.c:1546:default_lookup_cbk] 0-stack-trace: stack-address: 0x61c000036098, test-volume-io-threads returned 0
[2023-03-04 12:21:04.920994 +0000] T [MSGID: 0] [io-stats.c:2297:io_stats_lookup_cbk] 0-stack-trace: stack-address: 0x61c000036098, test-volume returned 0
[2023-03-04 12:21:04.921063 +0000] T [fuse-bridge.c:1378:fuse_attr_cbk] 0-glusterfs-fuse: 34: LOOKUP() / => 1
[2023-03-04 12:21:04.921106 +0000] T [fuse-bridge.c:374:send_fuse_iov] 0-glusterfs-fuse: writev() result 120/120
[2023-03-04 12:21:04.921250 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:21:04.921380 +0000] T [fuse-bridge.c:1185:fuse_lookup_resume] 0-glusterfs-fuse: 28: LOOKUP /testdir(304f3160-81b1-42f1-9b61-61b86af273ec)
[2023-03-04 12:21:04.921434 +0000] T [MSGID: 0] [fuse-bridge.c:1223:fuse_lookup_resume] 0-stack-trace: stack-address: 0x61c000036898, winding from fuse to meta-autoload
[2023-03-04 12:21:04.921482 +0000] T [MSGID: 0] [defaults.c:3024:default_lookup] 0-stack-trace: stack-address: 0x61c000036898, winding from meta-autoload to test-volume
[2023-03-04 12:21:04.921555 +0000] T [MSGID: 0] [io-stats.c:2768:io_stats_lookup] 0-stack-trace: stack-address: 0x61c000036898, winding from test-volume to test-volume-io-threads
[2023-03-04 12:21:04.921631 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: LOOKUP scheduled as fast priority fop
[2023-03-04 12:21:04.921713 +0000] T [MSGID: 0] [defaults.c:2215:default_lookup_resume] 0-stack-trace: stack-address: 0x61c000036898, winding from test-volume-io-threads to test-volume-md-cache
[2023-03-04 12:21:04.921817 +0000] T [MSGID: 0] [md-cache.c:628:mdc_inode_iatt_get] 0-md-cache: iatt cache not valid for (304f3160-81b1-42f1-9b61-61b86af273ec)
[2023-03-04 12:21:04.921949 +0000] T [MSGID: 0] [md-cache.c:1337:mdc_lookup] 0-stack-trace: stack-address: 0x61c000036898, winding from test-volume-md-cache to test-volume-quick-read
[2023-03-04 12:21:04.922011 +0000] T [MSGID: 0] [quick-read.c:647:qr_lookup] 0-stack-trace: stack-address: 0x61c000036898, winding from test-volume-quick-read to test-volume-open-behind
[2023-03-04 12:21:04.922062 +0000] T [MSGID: 0] [defaults.c:3024:default_lookup] 0-stack-trace: stack-address: 0x61c000036898, winding from test-volume-open-behind to test-volume-write-behind
[2023-03-04 12:21:04.922107 +0000] T [MSGID: 0] [write-behind.c:2413:wb_lookup] 0-stack-trace: stack-address: 0x61c000036898, winding from test-volume-write-behind to test-volume-utime
[2023-03-04 12:21:04.922157 +0000] T [MSGID: 0] [utime.c:260:gf_utime_lookup] 0-stack-trace: stack-address: 0x61c000036898, winding from test-volume-utime to test-volume-dht
[2023-03-04 12:21:04.922219 +0000] T [MSGID: 0] [dht-hashfn.c:95:dht_hash_compute] 0-test-volume-dht: trying regex for testdir
[2023-03-04 12:21:04.922354 +0000] T [MSGID: 0] [dht-common.c:3453:dht_do_revalidate] 0-stack-trace: stack-address: 0x61c000036898, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:21:04.922528 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 460, rpc hdr: 84
[2023-03-04 12:21:04.922628 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 36, XID: 0x2f, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (test-volume-client-0)
[2023-03-04 12:21:04.922672 +0000] T [MSGID: 0] [dht-common.c:3453:dht_do_revalidate] 0-stack-trace: stack-address: 0x61c000036898, winding from test-volume-dht to test-volume-client-1
[2023-03-04 12:21:04.922781 +0000] D [MSGID: 0] [client.c:203:client_submit_request] 0-test-volume-client-1: connection in disconnected state
[2023-03-04 12:21:04.922876 +0000] W [MSGID: 114031] [client-rpc-fops_v2.c:2561:client4_0_lookup_cbk] 0-test-volume-client-1: remote operation failed. [{path=/testdir}, {gfid=304f3160-81b1-42f1-9b61-61b86af273ec}, {errno=107}, {error=Transport endpoint is not connected}]
[2023-03-04 12:21:04.922920 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000036898, test-volume-client-1 returned -1 [Transport endpoint is not connected]
[2023-03-04 12:21:04.923017 +0000] D [dict.c:1393:dict_get_with_ref] (-->/usr/local/lib/libglusterfs.so.0(dict_get_uint32+0xe5) [0x7ffff72d6355] -->/usr/local/lib/libglusterfs.so.0(dict_get_with_ref+0xb9) [0x7ffff72d4569] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL: trusted.glusterfs.dht.commithash [Invalid argument]
[2023-03-04 12:21:04.923077 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /testdir: revalidate lookup on test-volume-client-1 returned op_ret -1 [Transport endpoint is not connected]
[2023-03-04 12:21:04.923145 +0000] W [MSGID: 114029] [client-rpc-fops_v2.c:2991:client4_0_lookup] 0-test-volume-client-1: failed to send the fop []
[2023-03-04 12:21:04.924400 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:21:04.924473 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:21:04.924511 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:21:04.924565 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:21:04.924723 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x2f Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) from rpc-transport (test-volume-client-0)
[2023-03-04 12:21:04.924842 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:2570:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x61c000036898, test-volume-client-0 returned 0
[2023-03-04 12:21:04.924922 +0000] D [MSGID: 0] [dht-common.c:1680:dht_revalidate_cbk] 0-test-volume-dht: /testdir: revalidate lookup on test-volume-client-0 returned op_ret 0
[2023-03-04 12:21:04.925049 +0000] T [MSGID: 0] [dht-common.c:1975:dht_revalidate_cbk] 0-stack-trace: stack-address: 0x61c000036898, test-volume-dht returned 0
[2023-03-04 12:21:04.925117 +0000] T [MSGID: 0] [utime.c:218:gf_utime_set_mdata_lookup_cbk] 0-stack-trace: stack-address: 0x61c000036898, test-volume-utime returned 0
[2023-03-04 12:21:04.925184 +0000] T [MSGID: 0] [write-behind.c:2371:wb_lookup_cbk] 0-stack-trace: stack-address: 0x61c000036898, test-volume-write-behind returned 0
[2023-03-04 12:21:04.925279 +0000] T [MSGID: 0] [quick-read.c:606:qr_lookup_cbk] 0-stack-trace: stack-address: 0x61c000036898, test-volume-quick-read returned 0
[2023-03-04 12:21:04.925455 +0000] T [md-cache.c:583:mdc_inode_iatt_set_validate] (-->/usr/local/lib/glusterfs/11dev/xlator/performance/md-cache.so(+0x266e0) [0x7fffeec9b6e0] -->/usr/local/lib/glusterfs/11dev/xlator/performance/md-cache.so(+0x26105) [0x7fffeec9b105] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-md-cache: Updated iatt(00000000-0000-0000-0000-000000000001) time:1677932464 generation=0
[2023-03-04 12:21:04.925549 +0000] T [md-cache.c:583:mdc_inode_iatt_set_validate] (-->/usr/local/lib/glusterfs/11dev/xlator/performance/md-cache.so(+0x26746) [0x7fffeec9b746] -->/usr/local/lib/glusterfs/11dev/xlator/performance/md-cache.so(+0x26105) [0x7fffeec9b105] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-md-cache: Updated iatt(304f3160-81b1-42f1-9b61-61b86af273ec) time:1677932464 generation=0
[2023-03-04 12:21:04.925611 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.glusterfs.mdata doesn't satisfy caching requirements
[2023-03-04 12:21:04.925687 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:21:04.925667 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.glusterfs.mdata doesn't satisfy caching requirements
[2023-03-04 12:21:04.925675 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key glusterfs.open-fd-count doesn't satisfy caching requirements
[2023-03-04 12:21:04.925801 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:21:04.925790 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key glusterfs.open-fd-count doesn't satisfy caching requirements
[2023-03-04 12:21:04.925797 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.glusterfs.dht doesn't satisfy caching requirements
[2023-03-04 12:21:04.925935 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:21:04.925922 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.glusterfs.dht doesn't satisfy caching requirements
[2023-03-04 12:21:04.925930 +0000] T [MSGID: 0] [md-cache.c:806:mdc_inode_xatt_set] 0-md-cache: xatt cache set for (304f3160-81b1-42f1-9b61-61b86af273ec) time:1677932464
[2023-03-04 12:21:04.926025 +0000] T [MSGID: 0] [md-cache.c:1269:mdc_lookup_cbk] 0-stack-trace: stack-address: 0x61c000036898, test-volume-md-cache returned 0
[2023-03-04 12:21:04.926072 +0000] T [MSGID: 0] [defaults.c:1546:default_lookup_cbk] 0-stack-trace: stack-address: 0x61c000036898, test-volume-io-threads returned 0
[2023-03-04 12:21:04.926124 +0000] T [MSGID: 0] [io-stats.c:2297:io_stats_lookup_cbk] 0-stack-trace: stack-address: 0x61c000036898, test-volume returned 0
[2023-03-04 12:21:04.926181 +0000] T [fuse-bridge.c:1034:fuse_entry_cbk] 0-glusterfs-fuse: 36: LOOKUP() /testdir => 11196337593315914732
[2023-03-04 12:21:04.926262 +0000] T [fuse-bridge.c:374:send_fuse_iov] 0-glusterfs-fuse: writev() result 144/144
[2023-03-04 12:21:04.926456 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:21:04.926486 +0000] T [fuse-bridge.c:4634:fuse_removexattr_resume] 0-glusterfs-fuse: 30: REMOVEXATTR /testdir/106721347494936 (user.attr)
[2023-03-04 12:21:04.926583 +0000] T [MSGID: 0] [fuse-bridge.c:4639:fuse_removexattr_resume] 0-stack-trace: stack-address: 0x61c000037098, winding from fuse to meta-autoload
[2023-03-04 12:21:04.926681 +0000] T [MSGID: 0] [defaults.c:2816:default_removexattr] 0-stack-trace: stack-address: 0x61c000037098, winding from meta-autoload to test-volume
[2023-03-04 12:21:04.926767 +0000] T [MSGID: 0] [io-stats.c:3286:io_stats_removexattr] 0-stack-trace: stack-address: 0x61c000037098, winding from test-volume to test-volume-io-threads
[2023-03-04 12:21:04.926899 +0000] D [MSGID: 0] [io-threads.c:370:iot_schedule] 0-test-volume-io-threads: REMOVEXATTR scheduled as normal priority fop
[2023-03-04 12:21:04.927032 +0000] T [MSGID: 0] [defaults.c:2046:default_removexattr_resume] 0-stack-trace: stack-address: 0x61c000037098, winding from test-volume-io-threads to test-volume-md-cache
[2023-03-04 12:21:04.927173 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key user.attr doesn't satisfy caching requirements
[2023-03-04 12:21:04.927280 +0000] D [logging.c:1866:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
[2023-03-04 12:21:04.927261 +0000] T [MSGID: 0] [md-cache.c:701:is_mdc_key_satisfied] 0-md-cache: xattr key user.attr doesn't satisfy caching requirements
[2023-03-04 12:21:04.927273 +0000] T [MSGID: 0] [md-cache.c:2738:mdc_removexattr] 0-stack-trace: stack-address: 0x61c000037098, winding from test-volume-md-cache to test-volume-quick-read
[2023-03-04 12:21:04.927480 +0000] T [MSGID: 0] [defaults.c:2816:default_removexattr] 0-stack-trace: stack-address: 0x61c000037098, winding from test-volume-quick-read to test-volume-open-behind
[2023-03-04 12:21:04.927555 +0000] T [MSGID: 0] [defaults.c:2816:default_removexattr] 0-stack-trace: stack-address: 0x61c000037098, winding from test-volume-open-behind to test-volume-write-behind
[2023-03-04 12:21:04.927623 +0000] T [MSGID: 0] [defaults.c:2816:default_removexattr] 0-stack-trace: stack-address: 0x61c000037098, winding from test-volume-write-behind to test-volume-utime
[2023-03-04 12:21:04.927732 +0000] T [MSGID: 0] [utime-autogen-fops.c:428:gf_utime_removexattr] 0-stack-trace: stack-address: 0x61c000037098, winding from test-volume-utime to test-volume-dht
[2023-03-04 12:21:04.927992 +0000] D [dict.c:650:dict_ref] (-->/usr/local/lib/glusterfs/11dev/xlator/cluster/distribute.so(+0x11e556) [0x7fffeeea4556] -->/usr/local/lib/libglusterfs.so.0(dict_ref+0x95) [0x7ffff72ce375] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL [Invalid argument]
[2023-03-04 12:21:04.928111 +0000] T [MSGID: 0] [dht-common.c:5587:dht_dir_common_set_remove_xattr] 0-stack-trace: stack-address: 0x61c000037098, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:21:04.928205 +0000] D [MSGID: 101015] [glusterfs3.h:785:dict_to_xdr] 0-dict: key 'trusted.glusterfs.dht.mds' would not be sent on wire in the future [Invalid argument]
[2023-03-04 12:21:04.928342 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 88, rpc hdr: 84
[2023-03-04 12:21:04.928500 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 37, XID: 0x30, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 33) to rpc-transport (test-volume-client-0)
[2023-03-04 12:21:04.929768 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:21:04.929821 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:21:04.929846 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:21:04.929885 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:21:04.929957 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x30 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 33) from rpc-transport (test-volume-client-0)
[2023-03-04 12:21:04.930009 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:1525:client4_0_xattrop_cbk] 0-test-volume-client-0: resetting op_ret to 0 from 0
[2023-03-04 12:21:04.930082 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:1530:client4_0_xattrop_cbk] 0-stack-trace: stack-address: 0x61c000037098, test-volume-client-0 returned 0
[2023-03-04 12:21:04.930153 +0000] T [MSGID: 0] [dht-common.c:4052:dht_xattrop_mds_cbk] 0-stack-trace: stack-address: 0x61c000037098, winding from test-volume-dht to test-volume-client-0
[2023-03-04 12:21:04.930262 +0000] T [rpc-clnt.c:1390:rpc_clnt_record_build_header] 0-rpc-clnt: Request payload: 44, rpc hdr: 84
[2023-03-04 12:21:04.930368 +0000] T [rpc-clnt.c:1655:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 37, XID: 0x31, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 19) to rpc-transport (test-volume-client-0)
[2023-03-04 12:21:04.930467 +0000] T [socket.c:2791:socket_event_handler] 0-test-volume-client-0: (sock:14) socket_event_poll_in returned 0
[2023-03-04 12:21:04.931481 +0000] T [socket.c:2748:socket_event_handler] 0-test-volume-client-0: client (sock:14) in:1, out:0, err:0
[2023-03-04 12:21:04.931542 +0000] T [socket.c:2774:socket_event_handler] 0-test-volume-client-0: Client socket (14) is already connected
[2023-03-04 12:21:04.931626 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:21:04.931664 +0000] T [socket.c:554:__socket_ssl_readv] 0-test-volume-client-0: ***** reading over non-SSL
[2023-03-04 12:21:04.931720 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x31 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 19) from rpc-transport (test-volume-client-0)
[2023-03-04 12:21:04.931782 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:1061:client4_0_removexattr_cbk] 0-stack-trace: stack-address: 0x61c000037098, test-volume-client-0 returned 0
[2023-03-04 12:21:04.931839 +0000] T [MSGID: 0] [dht-common.c:3977:dht_setxattr_mds_cbk] 0-stack-trace: stack-address: 0x61c000037098, winding from test-volume-dht to test-volume-client-1
[2023-03-04 12:21:04.931950 +0000] D [MSGID: 0] [client.c:203:client_submit_request] 0-test-volume-client-1: connection in disconnected state
[2023-03-04 12:21:04.932019 +0000] W [MSGID: 114031] [client-rpc-fops_v2.c:1057:client4_0_removexattr_cbk] 0-test-volume-client-1: remote operation failed. [{errno=107}, {error=Transport endpoint is not connected}]
[2023-03-04 12:21:04.932069 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:1061:client4_0_removexattr_cbk] 0-stack-trace: stack-address: 0x61c000037098, test-volume-client-1 returned -1 [Transport endpoint is not connected]
[2023-03-04 12:21:04.932117 +0000] D [MSGID: 0] [dht-common.c:3843:dht_setxattr_non_mds_cbk] 0-test-volume-dht: subvolume test-volume-client-1 returned -1 [Transport endpoint is not connected]
[2023-03-04 12:21:04.932182 +0000] T [MSGID: 0] [dht-common.c:3892:dht_setxattr_non_mds_cbk] 0-stack-trace: stack-address: 0x61c000037098, test-volume-dht returned 0
[2023-03-04 12:21:04.932224 +0000] T [MSGID: 0] [utime-autogen-fops.c:414:gf_utime_removexattr_cbk] 0-stack-trace: stack-address: 0x61c000037098, test-volume-utime returned 0
[2023-03-04 12:21:04.932347 +0000] D [dict.c:1393:dict_get_with_ref] (-->/usr/local/lib/libglusterfs.so.0(dict_get_iatt+0xe7) [0x7ffff72d9b17] -->/usr/local/lib/libglusterfs.so.0(dict_get_with_ref+0xb9) [0x7ffff72d4569] -->/usr/local/lib/libglusterfs.so.0(_gf_msg+0x5e5) [0x7ffff72f0275] ) 0-dict: dict is NULL: virt-gf-prestat [Invalid argument]
[2023-03-04 12:21:04.932398 +0000] T [MSGID: 0] [md-cache.c:2682:mdc_removexattr_cbk] 0-stack-trace: stack-address: 0x61c000037098, test-volume-md-cache returned 0
[2023-03-04 12:21:04.932445 +0000] T [MSGID: 0] [defaults.c:1403:default_removexattr_cbk] 0-stack-trace: stack-address: 0x61c000037098, test-volume-io-threads returned 0
[2023-03-04 12:21:04.932501 +0000] T [MSGID: 0] [io-stats.c:2453:io_stats_removexattr_cbk] 0-stack-trace: stack-address: 0x61c000037098, test-volume returned 0
[2023-03-04 12:21:04.932549 +0000] T [fuse-bridge.c:1953:fuse_removexattr_cbk] 0-glusterfs-fuse: 37: REMOVEXATTR() /testdir => 0
[2023-03-04 12:21:04.932602 +0000] T [fuse-bridge.c:374:send_fuse_iov] 0-glusterfs-fuse: writev() result 16/16
[2023-03-04 12:21:04.932780 +0000] W [MSGID: 114029] [client-rpc-fops_v2.c:4485:client4_0_removexattr] 0-test-volume-client-1: failed to send the fop []
Thanks for sharing the trace logs. Yes, I have configured the asan_path. I will try to reproduce on two servers; I tried on a single machine and was not able to reproduce it.
In the case of an xattr operation on a directory, DHT first triggers the call on the MDS subvolume and thereafter winds the fop to the non-MDS subvolumes. If the operation succeeds on the MDS, DHT returns success to the upper xlator.
It is strange: per the trace logs, the client gets a successful response from the MDS subvolume (test-volume-client-0) and an error in non_mds_cbk for test-volume-client-1; because the non-MDS subvolume is down, it throws "Transport endpoint is not connected".
[2023-03-04 12:21:04.931720 +0000] T [rpc-clnt.c:716:rpc_clnt_handle_reply] 0-test-volume-client-0: received rpc message (RPC XID: 0x31 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 19) from rpc-transport (test-volume-client-0)
[2023-03-04 12:21:04.931782 +0000] T [MSGID: 0] [client-rpc-fops_v2.c:1061:client4_0_removexattr_cbk] 0-stack-trace: stack-address: 0x61c000037098, test-volume-client-0 returned 0
[2023-03-04 12:21:04.931839 +0000] T [MSGID: 0] [dht-common.c:3977:dht_setxattr_mds_cbk] 0-stack-trace: stack-address: 0x61c000037098, winding from test-volume-dht to test-volume-client-1
[2023-03-04 12:21:04.931950 +0000] D [MSGID: 0] [client.c:203:client_submit_request] 0-test-volume-client-1: connection in disconnected state
As we can see, the trace logs show a successful wind message for dht_setxattr_mds_cbk, and no further code is hit by the mds_cbk function after that; if the process had crashed there, it should not have logged that message. Even FUSE reports that REMOVEXATTR succeeded, because the operation was executed on the MDS. So it is strange to me how FUSE can get a success even though the process crashed.
Thank you for your contributions. We noticed that this issue has not had any activity in the last ~6 months! We are marking this issue as stale because it has not had recent activity. It will be closed in 2 weeks if no one responds with a comment here.
CVE-2022-48340 is related to this issue, right?
Yes.
@lvtao-sec I have tried multiple times in my environment, but somehow I am not able to reproduce the issue. After reading the code, it seems I have found the reason why the client faces the issue. Would it be possible for you to test a patch in your environment? I am also getting similar errors, but ASAN is not producing a crash dump in my environment. I think the crash depends on the environment; we can't expect a process to crash every time in the case of a use-after-free.
@lvtao-sec After trying in another environment I am able to reproduce the issue; I will upload the patch.
Hi, @lvtao-sec, how can I trigger such a detection? I have set `--enable-asan` and compiled it, but I get nothing when I run some gluster cmds, even though I set `export ASAN_OPTIONS="log_path=/tmp/asan-log"`. When will the log be generated and written? And do I need some extra settings?
Hi,
You can follow the steps here: https://github.com/gluster/glusterfs/issues/3732#issue-1346559275
I think `--enable-asan` and `export ASAN_OPTIONS="log_path=/tmp/asan-log"` are enough. When the bug is triggered after executing the reproduction commands, the crash backtrace will be recorded in `/tmp/asan-log`.
This bug is delicate: sometimes it doesn't show up. Try to reproduce it multiple times to see if you can succeed once.
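For reference, the ASAN build-and-run steps look roughly like this. Paths and the source-tree layout are assumptions from a typical checkout, not taken from this thread; note also that daemons launched via systemd do not inherit a shell-exported `ASAN_OPTIONS`, so the variable must be visible to whichever process you expect to crash (here, the FUSE client started from the same shell).

```shell
# Build glusterfs with AddressSanitizer instrumentation (assumed source layout)
./autogen.sh
./configure --enable-asan
make -j"$(nproc)"
make install

# Tell ASAN where to write reports; each process appends its PID,
# e.g. /tmp/asan-log.326
export ASAN_OPTIONS="log_path=/tmp/asan-log"

# Start the client from this shell so it inherits ASAN_OPTIONS
mount -t glusterfs 192.168.0.30:/test_volume /root/glusterfs-client/
```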
OK, thanks for your rapid reply. I'll have a try :)