optimize syscalls
Another area of optimization is to avoid unnecessary system calls, e.g. a kdbGet (without any changes) does:
access("/root/.cache/elektra/backend/", W_OK) = 0
open("/root/.cache/elektra/backend//cache_cascading.mmap", O_RDONLY|O_LARGEFILE) = 5
fstat64(5, {st_mode=S_IFREG|0600, st_size=10512, ...}) = 0
mmap2(NULL, 10512, PROT_READ|PROT_WRITE, MAP_PRIVATE, 5, 0) = 0xb6c8d000
close(5) = 0
stat64("/usr/share/elektra/specification/default.ecf", 0xbe8bcba0) = -1 ENOENT (No such file or directory)
stat64("/tmp/etc/elektra-atm.ini", 0xbe8bcba0) = -1 ENOENT (No such file or directory)
stat64("/root/etc/elektra-atm.ini", 0xbe8bcba0) = -1 ENOENT (No such file or directory)
stat64("/etc/elektra-atm.ini", {st_mode=S_IFREG|0600, st_size=411, ...}) = 0
The stat64 calls are obviously needed, but why do we recreate the mmap if nothing has actually changed? Shouldn't kdbGet be a NOP then?
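To illustrate what a NOP-like kdbGet could look like at the syscall level, here is a minimal POSIX sketch (not Elektra's actual cache code; struct cached_mapping and get_mapping are made-up names): remember the stat result of the configuration file from the previous call and only redo the open/fstat/mmap/close sequence when the modification time has changed.

/* Hypothetical sketch, not Elektra's cache code: keep the previous mapping
 * and only redo open/fstat/mmap/close when the config file's mtime changed. */
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

struct cached_mapping
{
	void * addr;           /* mapping from the previous call, NULL initially */
	size_t length;
	struct timespec mtime; /* config file mtime at the time of mapping */
};

static void * get_mapping (struct cached_mapping * c, const char * configPath, const char * cachePath)
{
	struct stat st;
	if (stat (configPath, &st) != 0) return NULL; /* config gone: caller must do a full reload */

	/* Nothing changed since the last call: reuse the mapping, no further syscalls. */
	if (c->addr && st.st_mtim.tv_sec == c->mtime.tv_sec && st.st_mtim.tv_nsec == c->mtime.tv_nsec) return c->addr;

	/* First call or the config changed: (re)open and map the cache file. */
	if (c->addr) munmap (c->addr, c->length);
	c->addr = NULL;

	int fd = open (cachePath, O_RDONLY);
	if (fd < 0) return NULL;
	struct stat cst;
	if (fstat (fd, &cst) != 0)
	{
		close (fd);
		return NULL;
	}
	c->length = (size_t) cst.st_size;
	void * addr = mmap (NULL, c->length, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
	close (fd);
	if (addr == MAP_FAILED) return NULL;
	c->addr = addr;
	c->mtime = st.st_mtim;
	return c->addr;
}

Whether this fits Elektra's resolver/cache design is a separate question; the sketch only shows that a second kdbGet with unchanged sources could get away with the stat64 calls alone.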
There might be further unnecessary create, copy, rename, write, unlink calls to be investigated.
@mpranj are you interested in this kind of work?
The goal would be that the cache can also live on flash systems where write access is costly. (And no workaround of keeping the cache in tmpfs would be needed, since tmpfs doesn't survive reboots.)
The stat64 calls are obviously needed, but why do we recreate the mmap if nothing has actually changed? Shouldn't kdbGet be a NOP then?
I simply overlooked this case to be honest. Thank you for investigating this.
The cache is not recreated, though; it is just re-opened. I fully agree this needs to be optimized away, and I opened #3944 to track this specific problem.
And no workaround of keeping the cache in tmpfs would be needed
The tmpfs "workaround" is not needed because of the cache. It is needed due to slow I/O inside docker containers on specific host filesystems. Since we don't need to persist testing data in the first place, I put all obvious write heavy directories into tmpfs. The main reason to add this workaround was testscr_check_kdb_internal_suite (as noted in #3512).
I don't like to call this a workaround. The tmpfs should probably have been there in the first place, and I think it is here to stay. We do not need to exhaust the write cycles of the SSDs with temporary data which is never needed again after the tests finish.
are you interested in this kind of work?
Yes, this sounds very much like my cup of tea.
Thank you so much for taking over this topic! :sparkling_heart:
We do not need to exhaust the write cycles of the SSDs with temporary data which is never needed again after the tests finish.
I need to clarify: @haraldg now also wants to use this workaround for #3909. For tests it is okay to use tmpfs.
I'm not quite sure what we are talking about here.
The unnecessary open() etc. calls don't bother me much. I only pointed them out because I happened upon them when checking with strace (for generally sane behaviour).
What is more of an issue: I also noticed that "/root/.cache/elektra/backend//cache_cascading.mmap" is written (according to filesystem timestamps) once per application start, but not for each kdbGet(). I didn't investigate further, but as a workaround I have set XDG_CACHE_HOME=/tmp/ globally in the 0.9.7 OpenWRT package. That seems to work well enough.
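One possible approach for the write side (purely illustrative, not the actual resolver/cache code; write_cache_if_changed is a made-up name) would be to compare the new cache contents against what is already on disk and skip the write entirely when they are identical, so flash write cycles are only spent when something really changed:

/* Hypothetical sketch: avoid rewriting the cache file when the new contents
 * are identical to what is already on disk. */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Returns 0 on success (file already up to date or rewritten), -1 on error. */
static int write_cache_if_changed (const char * path, const void * data, size_t len)
{
	struct stat st;
	int fd = open (path, O_RDONLY);
	if (fd >= 0 && fstat (fd, &st) == 0 && (size_t) st.st_size == len)
	{
		/* Same size: compare contents before touching the file
		 * (partial reads ignored for brevity). */
		void * old = malloc (len);
		ssize_t got = old ? read (fd, old, len) : -1;
		int same = got == (ssize_t) len && memcmp (old, data, len) == 0;
		free (old);
		close (fd);
		if (same) return 0; /* nothing to do: no write, no mtime update */
	}
	else if (fd >= 0)
	{
		close (fd);
	}

	/* Contents differ (or file missing): rewrite it. */
	fd = open (path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
	if (fd < 0) return -1;
	ssize_t written = write (fd, data, len);
	close (fd);
	return written == (ssize_t) len ? 0 : -1;
}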
I mark this issue stale as it did not have any activity for one year. I'll close it in two weeks if no further activity occurs. If you want it to be alive again, ping the issue by writing a message here or create a new issue with the remainder of this issue. Thank you for your contributions :sparkling_heart:
This is actually a resolver topic; I added it to #4423.
I mark this stale as it did not have any activity for one year. I'll close it in two weeks if no further activity occurs. If you want it to be alive again, ping by writing a message here or create a new issue with the remainder of this issue. Thank you for your contributions :sparkling_heart:
I closed this now because it has been inactive for more than one year. If I closed it by mistake, please do not hesitate to reopen it or create a new issue with the remainder of this issue. Thank you for your contributions :sparkling_heart: