mohit84
> > Do index insert/delete operations in an internal data structure (hash) and save the gfids in a file when the index refcount becomes 1 and 0. During the brick start, the...
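If I understand the proposal correctly, here is a minimal sketch of the idea (plain C, not GlusterFS code; the hash layout, journal format, and all names are hypothetical): keep the gfid refcounts in an in-memory hash table and append the gfid to an on-disk journal only on the 0 → 1 and 1 → 0 transitions, so the brick can rebuild the index by replaying that file at start:

```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUCKETS 4096

/* Hypothetical in-memory index entry: gfid -> refcount. */
struct idx_entry {
    char gfid[37];            /* canonical uuid string + NUL */
    unsigned refcount;
    struct idx_entry *next;
};

static struct idx_entry *table[BUCKETS];
static FILE *journal;         /* file replayed on brick start */

static unsigned hash_gfid(const char *gfid)
{
    unsigned h = 5381;
    while (*gfid)
        h = h * 33 + (unsigned char)*gfid++;
    return h % BUCKETS;
}

static struct idx_entry *lookup(const char *gfid)
{
    struct idx_entry *e = table[hash_gfid(gfid)];
    while (e && strcmp(e->gfid, gfid) != 0)
        e = e->next;
    return e;
}

/* Insert: only the 0 -> 1 transition touches the journal. */
void index_insert(const char *gfid)
{
    struct idx_entry *e = lookup(gfid);
    if (e) {
        e->refcount++;
        return;
    }
    e = calloc(1, sizeof(*e));
    snprintf(e->gfid, sizeof(e->gfid), "%s", gfid);
    e->refcount = 1;
    unsigned b = hash_gfid(gfid);
    e->next = table[b];
    table[b] = e;
    fprintf(journal, "+%s\n", gfid);   /* refcount became 1 */
    fflush(journal);
}

/* Delete: only the 1 -> 0 transition touches the journal. */
void index_delete(const char *gfid)
{
    struct idx_entry *e = lookup(gfid);
    if (!e || --e->refcount > 0)
        return;
    fprintf(journal, "-%s\n", gfid);   /* refcount became 0 */
    fflush(journal);
    /* unlink from the bucket chain and free the entry */
    struct idx_entry **pp = &table[hash_gfid(gfid)];
    while (*pp != e)
        pp = &(*pp)->next;
    *pp = e->next;
    free(e);
}

int main(void)
{
    journal = fopen("index.journal", "a");
    if (!journal)
        return 1;
    index_insert("hypothetical-gfid-0001");
    index_delete("hypothetical-gfid-0001");
    fclose(journal);
    return 0;
}
```

The point of journaling only the transitions is that intermediate refcount changes never hit disk, which is where the claimed saving over per-operation index updates would come from.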
> @mohit84 I did not understand the XFS performance penalty thing. Are you saying that XFS keeps switching between storing the dentries in the inode and storing them in a...
Below is the perf result after executing a small test case on a single node (1x3):

```
echo 3 > /proc/sys/vm/drop_caches; /root/smallfile/smallfile_cli.py --operation create --threads 8 --file-size 64 --files 10000 --top /mnt/test; echo...
```
> > > > Do index insert/delete operations in an internal data structure (hash) and save the gfids in a file when the index refcount becomes 1 and 0. During the brick...
> As per my understanding, if we have brick multiplexing with 64 bricks per machine (which is what I remember to be the worst-case scenario), this method...
Would it be possible for you to try mounting a volume without global-thread? Please try io-threads instead of global-thread for the client. You can enable io-threads only for the client and...
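For example, something along these lines (a sketch, assuming a volume named `testvol`; `config.global-threading` and `performance.iot-pass-through` are the options visible in the output below, and `performance.client-io-threads` is the standard client-side io-threads switch):

```
# disable global threading and stop bypassing io-threads,
# so the client-side io-threads xlator does the work instead
gluster volume set testvol config.global-threading off
gluster volume set testvol performance.iot-pass-through false
gluster volume set testvol performance.client-io-threads on
```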
> @mohit84 we're not using global threads on the clients, only the bricks:
>
> ```
> Options Reconfigured:
> cluster.locking-scheme: granular
> performance.open-behind: off
> performance.iot-pass-through: true
> config.global-threading: on
> config.brick-threads:...
> ```
> > Actually, I missed reading the stack backtrace. The thread (gf_io_thread_main) was spawned because io_uring is enabled, not global-thread. Can you please try after disabling io_uring?
>
> You referring to...
> Thought more on this, and I guess, instead of another volfile-server as an xlator, having the server protocol itself properly serve `server_getspec()` would solve the issue of a gluster protocol's getspec (to...
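Roughly the shape of that idea, as a sketch (plain C, not the actual GlusterFS RPC code; `volfile_path_for()`, the path layout, and the handler signature are all hypothetical): the server protocol answers a getspec request by reading the stored volfile and returning its contents, instead of delegating to a separate volfile-server xlator:

```
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical: map a getspec key (volume name) to the path of the
 * generated client volfile; the real layout/naming may differ. */
static const char *volfile_path_for(const char *key)
{
    static char path[512];
    snprintf(path, sizeof(path), "/var/lib/glusterd/vols/%s/%s.tcp-fuse.vol",
             key, key);
    return path;
}

/* Hypothetical handler: the server protocol itself serves the spec. */
char *serve_getspec(const char *key, long *len_out)
{
    FILE *f = fopen(volfile_path_for(key), "rb");
    if (!f)
        return NULL;

    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    rewind(f);
    if (len < 0) {
        fclose(f);
        return NULL;
    }

    char *spec = malloc(len + 1);
    if (spec && fread(spec, 1, len, f) == (size_t)len) {
        spec[len] = '\0';
        *len_out = len;
    } else {
        free(spec);
        spec = NULL;
    }
    fclose(f);
    return spec;   /* sent back to the client as the getspec reply */
}
```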
Thanks, Xavi, for clearing up the doubts. I think you can go with this approach.