
Kernel: Try to coalesce some IO in Ext2FS

Hendiadyoin1 opened this pull request • 5 comments

This tries to lessen the IO strain, potentially increasing data throughput. It currently runs up against BlockBasedFileSystem's disk cache, whose design is not multi-block friendly.
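For illustration, a minimal standalone sketch of the coalescing idea (hypothetical types and names, not the actual Ext2FS code): consecutive on-disk block indices backing a range of an inode are merged into runs, so each run can be submitted as one larger request instead of one request per block.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// A contiguous run of on-disk blocks: `count` blocks starting at `first`.
struct BlockRun {
    std::uint64_t first { 0 };
    std::uint64_t count { 0 };
};

// Merge consecutive block indices into runs, so each run can be submitted as
// one multi-block request instead of `count` single-block requests.
static std::vector<BlockRun> coalesce_blocks(std::vector<std::uint64_t> const& blocks)
{
    std::vector<BlockRun> runs;
    for (auto block : blocks) {
        if (!runs.empty() && runs.back().first + runs.back().count == block)
            runs.back().count++;
        else
            runs.push_back({ block, 1 });
    }
    return runs;
}

int main()
{
    // Example: 7 logical blocks of a file map to 3 physical runs on disk.
    std::vector<std::uint64_t> blocks { 100, 101, 102, 200, 201, 300, 301 };
    for (auto const& run : coalesce_blocks(blocks))
        std::printf("read %llu block(s) starting at block %llu\n",
            (unsigned long long)run.count, (unsigned long long)run.first);
}
```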

Hendiadyoin1 · Jun 07 '23 14:06

Can you please explain how you benchmarked this and what the results were?

awesomekling · Jun 07 '23 14:06

So, due to the reasons outlined (namely the disk-cache), the amount of IO we do is still the same, except during file system initialization and super-block flushing, where we use the raw variants.

To check that the code paths are actually taken, I added a dbgln in Ext2FS/Inode::read and ran all tests; test-js, for example, now seemed to be doing 5-block reads at a time.

This only affects sys$read/sys$write, to my knowledge.

So this is mainly groundwork that requires an architectural change in how we handle IO caching to be fully effective. (Technically, this might improve performance by reducing the number of calls between Inode, FS and Device.)
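To illustrate the call-count point, a tiny standalone sketch (hypothetical, not the real Inode/FS/Device interfaces): a contiguous range read as one request versus one request per block.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Hypothetical stand-in for the device side, only to illustrate the
// call-count argument: it just counts how many requests reach it.
struct CountingBlockDevice {
    std::size_t requests_issued { 0 };
    void read_blocks(std::uint64_t /*first_block*/, std::size_t /*count*/) { requests_issued++; }
};

int main()
{
    constexpr std::uint64_t first_block = 1000;
    constexpr std::size_t block_count = 8;

    // Before: one request per block, even when the blocks are adjacent.
    CountingBlockDevice per_block;
    for (std::size_t i = 0; i < block_count; ++i)
        per_block.read_blocks(first_block + i, 1);

    // After: a single request covering the whole contiguous range.
    CountingBlockDevice coalesced;
    coalesced.read_blocks(first_block, block_count);

    std::printf("per-block: %zu requests, coalesced: %zu request(s)\n",
        per_block.requests_issued, coalesced.requests_issued);
}
```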

Hendiadyoin1 · Jun 07 '23 14:06

I don't believe this groundwork is extensive enough to warrant its own PR without some significant improvement. Can't you get this to reach the disk in a way that actually is measurably faster?

gmta · Jun 07 '23 16:06

I will just put my comment from the Discord server here as well: https://discord.com/channels/830522505605283862/830525093905170492/1116063101176008805

[8:56 PM] supercomputer7: disk-cache should be removed
[8:56 PM] supercomputer7: it is utterly incapable in its design for what we need
[8:56 PM] supercomputer7: what we want is a page cache in the vfs layer
[8:58 PM] supercomputer7: the vfs layer knows about the whole fs tree, so it has much more broad view of what's going on
[8:58 PM] supercomputer7: it could manage cache in a much more reasonable way
[9:00 PM] supercomputer7: therefore I think PR 19303 should be directed at that thing
[9:01 PM] supercomputer7: and not at trying to fix something that is broken and should not be there from the beginning...
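For context, a minimal sketch of what such a VFS-level page cache could look like (purely hypothetical types, not existing SerenityOS code): pages keyed by (inode identifier, page index), so one cache covers the whole FS tree instead of a single block device.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <map>
#include <utility>

// Minimal sketch of a VFS-level page cache (hypothetical): pages are keyed by
// (inode identifier, page index within the file), so one cache can serve every
// mounted file system and evict with a global policy.
static constexpr std::size_t page_size = 4096;

struct PageKey {
    std::uint64_t inode_id { 0 };   // stand-in for a (filesystem id, inode) pair
    std::uint64_t page_index { 0 }; // byte offset / page_size
    bool operator<(PageKey const& other) const
    {
        return inode_id != other.inode_id ? inode_id < other.inode_id
                                          : page_index < other.page_index;
    }
};

struct CachedPage {
    std::array<std::uint8_t, page_size> data {};
    bool dirty { false };
};

class PageCache {
public:
    // Returns the cached page if present; otherwise the VFS would ask the file
    // system to fill a fresh page (ideally with one coalesced IO).
    CachedPage* find(PageKey key)
    {
        auto it = m_pages.find(key);
        return it == m_pages.end() ? nullptr : &it->second;
    }

    CachedPage& insert(PageKey key, CachedPage page)
    {
        return m_pages.insert_or_assign(key, std::move(page)).first->second;
    }

private:
    std::map<PageKey, CachedPage> m_pages;
};

int main()
{
    PageCache cache;
    PageKey key { 42, 0 };
    if (!cache.find(key))
        cache.insert(key, CachedPage {}); // normally filled by the file system
    return cache.find(key) ? 0 : 1;
}
```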

supercomputer7 · Jun 07 '23 18:06

On that, as hinted at in the same conversation: after removing the disk-cache, it turns out that a lot of other parts of the storage device infrastructure do not accommodate IO of more than a page at a time, either silently truncating or just asserting. Even SDHC, which is laid out without size assumptions, currently dies in my local experiment on a failed data copy to user, which I need to investigate.
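One possible way to handle that limitation, sketched standalone and hypothetically (not the actual SerenityOS storage code): a layer with a maximum transfer size splits oversized requests into chunks instead of truncating or asserting.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Illustrative only: a layer that can move at most `max_io_size` bytes per
// request splits larger requests instead of truncating or asserting on them.
static constexpr std::size_t max_io_size = 4096; // e.g. one page

// Stand-in for a single hardware transfer of up to max_io_size bytes.
static void issue_transfer(std::uint64_t device_offset, std::size_t length)
{
    std::printf("transfer %zu bytes at offset %llu\n", length, (unsigned long long)device_offset);
}

// Split an arbitrarily large request into transfers the lower layer accepts.
static void issue_request(std::uint64_t device_offset, std::size_t length)
{
    while (length > 0) {
        std::size_t chunk = std::min(length, max_io_size);
        issue_transfer(device_offset, chunk);
        device_offset += chunk;
        length -= chunk;
    }
}

int main()
{
    // A coalesced 5-block (20 KiB) read still works against a page-limited layer.
    issue_request(0, 5 * 4096);
}
```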

Hendiadyoin1 · Jun 08 '23 09:06

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions!

stale[bot] · Jun 29 '23 15:06

Will close this until the InodeCache is reworked/removed and I figure out an architecture for reporting IO size limitations to a user/overlying interface, and maybe even for lowering those limitations.
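As a hedged sketch of what "reporting IO size limitations to an overlying interface" might look like (hypothetical interface, not an existing SerenityOS API): the device advertises its maximum request size, and upper layers clamp or split their coalesced IO against it.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Hypothetical sketch: a storage device advertises the largest request it can
// take, so overlying layers can size (or split) their coalesced IO up front
// instead of having the device truncate or assert.
class StorageDeviceLike {
public:
    virtual ~StorageDeviceLike() = default;
    virtual std::size_t max_request_size() const = 0; // in bytes
    virtual void submit(std::uint64_t offset, std::size_t length) = 0;
};

class PageLimitedDevice final : public StorageDeviceLike {
public:
    std::size_t max_request_size() const override { return 4096; }
    void submit(std::uint64_t offset, std::size_t length) override
    {
        std::printf("submit %zu bytes at %llu\n", length, (unsigned long long)offset);
    }
};

int main()
{
    PageLimitedDevice device;
    std::size_t wanted = 5 * 4096; // a coalesced 5-block read
    // The caller clamps against the reported limit; the remainder would be
    // submitted as follow-up requests.
    device.submit(0, std::min(wanted, device.max_request_size()));
}
```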

Hendiadyoin1 · Jun 29 '23 15:06