Eliminate iobuf_pool redundancy
We have a memory-pool implementation with some pretty good properties w.r.t. performance, memory consumption, locality of reference, etc. Iobuf_pool purports to serve an extremely similar need, with few (if any) unique requirements. The implementation is very sub-optimal in terms of large static vs. dynamic allocations, refcounting via locks vs. atomics, and so on. We should explore opportunities to unify these components, or at least modernize the iobuf_pool implementation.
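For illustration of the refcounting point (a minimal sketch with made-up structs, not the actual iobuf code): lock-based refcounting pays for a mutex round-trip on every ref, while an atomic increment is a single lock-free operation.

#include <pthread.h>
#include <stdatomic.h>

/* Lock-based refcount: every ref takes and releases a mutex. */
struct buf_locked {
    pthread_mutex_t lock;
    int ref;
};

static void
ref_locked(struct buf_locked *b)
{
    pthread_mutex_lock(&b->lock);
    b->ref++;
    pthread_mutex_unlock(&b->lock);
}

/* Atomic refcount: one lock-free instruction, no mutex contention. */
struct buf_atomic {
    atomic_int ref;
};

static void
ref_atomic(struct buf_atomic *b)
{
    atomic_fetch_add_explicit(&b->ref, 1, memory_order_relaxed);
}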
Thank you for your contributions. Noticed that this issue has not had any activity in the last ~6 months! We are marking this issue as stale because it has not had recent activity. It will be closed in 2 weeks if no one responds with a comment here.
Can we just get rid of memory pools? @amarts - thoughts?
I am in favor! I believe the basic system allocators (malloc/tcmalloc) have gotten better in the last 12 years!
At that time the memory pool helped us reduce context switches, and we no longer see that as a major bottleneck! With a good set of perf tests, complete removal of the memory pool is not a bad idea!
Closing this issue as there has been no update since my last update on the issue. If this issue is still valid, feel free to reopen it.
Noticed that the previous discussion didn't remove the stale label, which made the bot close this issue. Reopening!
It's not very difficult to get rid of memory pools slowly but surely; it's just that they're used literally everywhere. Here's a reasonable start:
#if defined(GF_DISABLE_MEMPOOL)
#define MEM_GET(mem_pool, size) GF_MALLOC(size, gf_common_mt_mem_pool)
#define MEM_GET0(mem_pool, size) GF_CALLOC(1, size, gf_common_mt_mem_pool)
#else
#define MEM_GET(mem_pool, size) mem_get(mem_pool)
#define MEM_GET0(mem_pool, size) mem_get0(mem_pool)
#endif
(and then change calls to mem_get() and mem_get0() to those macros above).
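A call-site conversion could look something like this (illustrative only; the variable and pool names are made up):

/* Before: always allocates from the pool. */
local = mem_get0(this->local_pool);

/* After: expands to GF_CALLOC() when GF_DISABLE_MEMPOOL is defined,
 * and to mem_get0() otherwise. */
local = MEM_GET0(this->local_pool, sizeof(*local));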
A side benefit (apart from getting rid of memory pools) can be seen in this snippet from the STACK_WIND_COMMON macro:
_new = MEM_GET0(frame->root->pool->frame_mem_pool, sizeof(call_frame_t));
We would no longer need to fetch frame->root->pool->frame_mem_pool - that is several dependent memory fetches just to locate the pool.
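For example, with GF_DISABLE_MEMPOOL defined that line would effectively preprocess down to the following; the unused mem_pool argument is discarded by the preprocessor, so the chain of loads through frame->root->pool->frame_mem_pool never happens:

/* Effective expansion when GF_DISABLE_MEMPOOL is defined. */
_new = GF_CALLOC(1, sizeof(call_frame_t), gf_common_mt_mem_pool);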
Thank you for your contributions. Noticed that this issue has not had any activity in the last ~6 months! We are marking this issue as stale because it has not had recent activity. It will be closed in 2 weeks if no one responds with a comment here.
Closing this issue as there has been no update since my last update on the issue. If this issue is still valid, feel free to reopen it.
Now that we've made memory pools fall back to regular allocation when the mempool mechanism is disabled, shall we consider disabling it by default for Gluster 10? @mohit84?
Yes, we are planning to disable MEMPOOL for the Gluster 10 release.
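For context, a rough sketch of the shape such a fallback can take (illustrative only, not the actual libglusterfs mem-pool code; it assumes mem-pool.h for struct mem_pool and mem_get0()):

#include <stdlib.h>

void *
mem_get0_sketch(struct mem_pool *pool, size_t obj_size)
{
#if defined(GF_DISABLE_MEMPOOL)
    (void)pool;                 /* pool bookkeeping is bypassed entirely */
    return calloc(1, obj_size); /* regular allocator (glibc, tcmalloc, ...) */
#else
    (void)obj_size;             /* the pooled path already knows the object size */
    return mem_get0(pool);
#endif
}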
Hi,
I have executed the script below to test mempool enabled/disabled performance on the latest master branch, on a single physical node. The test ran on NVMe disks (2x3 volume topology) with the default configuration, plus 4 event threads configured for both client and server.
for ((i=0; i<=4; i++)); do
    echo "#############Iteration $i ###########"
    echo 3 > /proc/sys/vm/drop_caches; time ./smallfile_cli.py --operation create --threads 16 --file-size 64 --files 100000 --top /mnt/test
    echo 3 > /proc/sys/vm/drop_caches; time ./smallfile_cli.py --operation ls-l --threads 16 --file-size 64 --files 100000 --top /mnt/test
    echo 3 > /proc/sys/vm/drop_caches; time ./smallfile_cli.py --operation chmod --threads 16 --file-size 64 --files 100000 --top /mnt/test
    echo 3 > /proc/sys/vm/drop_caches; time ./smallfile_cli.py --operation stat --threads 16 --file-size 64 --files 100000 --top /mnt/test
    echo 3 > /proc/sys/vm/drop_caches; time ./smallfile_cli.py --operation read --threads 16 --file-size 64 --files 100000 --top /mnt/test
    echo 3 > /proc/sys/vm/drop_caches; time ./smallfile_cli.py --operation append --threads 16 --file-size 64 --files 100000 --top /mnt/test
    echo 3 > /proc/sys/vm/drop_caches; time ./smallfile_cli.py --operation mkdir --threads 16 --file-size 64 --files 100000 --top /mnt/test
    echo 3 > /proc/sys/vm/drop_caches; time ./smallfile_cli.py --operation rmdir --threads 16 --file-size 64 --files 100000 --top /mnt/test
    echo 3 > /proc/sys/vm/drop_caches; time ./smallfile_cli.py --operation cleanup --threads 16 --file-size 64 --files 100000 --top /mnt/test
done
I have found the results below (higher is better).
Fop: mempool enabled (default) / mempool disabled
create: 1554.52 1435.25
ls-l: 55225.4 50093.7
chmod: 3972.53 3412.18
stat: 6290.55 5234.12
read: 4078.17 3602.56
append: 1863.99 1761.32
mkdir: 1522.75 1443.1
rmdir: 1476.02 1378.67
cleanup: 1179.67 1017.03
Memory consumption (mempool enabled / mempool disabled):
glusterd(avg): 12165K 9991K
brick1(avg): 144579K 125596K
brick2(avg): 145044K 132623K
brick3(avg): 138785K 129467K
brick4(avg): 134308K 127238K
brick5(avg): 139554K 130883K
brick6(avg): 137484K 132372K
glustershd: 35510K 31074K
client(fuse): 299562K 283260K
I don't think we can disable the glusterfs mempool: we see an almost 10% performance reduction when using the glibc pool. We do get some improvement in memory consumption, but the performance degradation outweighs it, so I believe it is not a good idea to disable the glusterfs memory pool.
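(Checking that ~10% figure against the table above: read, for instance, drops from 4078.17 to 3602.56, i.e. (4078.17 - 3602.56) / 4078.17 ≈ 11.7%; the per-fop drops range from about 5% (mkdir, append) to about 17% (stat), and averaging them across the nine fops comes out to roughly 10%.)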
Thanks, Mohit Agrawal
Thanks for providing the numbers. Yes, I agree that we shouldn't disable if there is perf degradation.
Thanks for the great work @mohit84 . If you still have the setup, can you just re-run this with tcmalloc? I agree we don't want to disable if we see such a significant performance drop.
Hi Yaniv,
Thanks for the reminder to test tcmalloc. On a single node I am getting a good performance improvement with tcmalloc (about 10% over the glusterfs pool, and more than 15% over the glibc pool), so I believe we can enable tcmalloc and disable the mempool.
Fop: mempool enabled (default) / mempool disabled / tcmalloc
create: 1554.52 1435.25 1708.72
ls-l: 55225.4 50093.7 59384.2
chmod: 3972.53 3412.18 4462.78
stat: 6290.55 5234.12 7406.67
read: 4078.17 3602.56 4777.25
append: 1863.99 1761.32 1989.82
mkdir: 1522.75 1443.1 1610.38
rmdir: 1476.02 1378.67 1565.25
cleanup: 1179.67 1017.03 1388.02
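(As a quick check of the ~10% claim: read goes from 4078.17 with the glusterfs pool to 4777.25 with tcmalloc, i.e. (4777.25 - 4078.17) / 4078.17 ≈ 17% faster, while the average gain across the nine fops is roughly 10-11% over the glusterfs pool, and larger still over the pool-disabled/glibc run.)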
Memory consumption (mempool enabled / mempool disabled / tcmalloc):
glusterd(avg): 12165K 9991K 16258K
brick1(avg): 144579K 125596K 115282K
brick2(avg): 145044K 132623K 118767K
brick3(avg): 138785K 129467K 120089K
brick4(avg): 134308K 127238K 117336K
brick5(avg): 139554K 130883K 116323K
brick6(avg): 137484K 132372K 111835K
glustershd: 35510K 31074K 27591.3K
client(fuse): 299562K 283260K 401101K
We will try to run the same test case on a multi-node system on NVMe/SSD and will share the numbers soon.
Just to make sure: there are two comparisons that could be made, and I'm unsure which I'm seeing: tcmalloc with mempool enabled, or tcmalloc with mempool disabled.
Which one am I seeing above? The latter? That's great!
Yes, tcmalloc with mempool disabled.
Thank you for your contributions. Noticed that this issue has not had any activity in the last ~6 months! We are marking this issue as stale because it has not had recent activity. It will be closed in 2 weeks if no one responds with a comment here.
I thought we wanted to remove the mempool code completely in release 11?
Thank you for your contributions. Noticed that this issue has not had any activity in the last ~6 months! We are marking this issue as stale because it has not had recent activity. It will be closed in 2 weeks if no one responds with a comment here.
Closing this issue as there has been no update since my last update on the issue. If this issue is still valid, feel free to reopen it.