Why are there so many block erases?
Hi, I'm testing littlefs for use in an existing embedded system with SPI NOR flash. I'm finding that there are a lot more block erases than I would have guessed, and I'm hoping you can help me to understand why that is.
My lfs_config struct has:

- .read_size = 1
- .prog_size = 1
- .block_size = 4096
- .block_count = 819 <== not a typo; I'm testing with about 1/10 of a flash with 8192 blocks
- .block_cycles = 200
- .cache_size = 4096
- .lookahead_size = 128
- .name_max = 255
- .file_max = 2147483647
- .attr_max = 1022
My test code is counting the number of calls to the .read(), .prog() and .erase() functions I've provided in the lfs_config struct.
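For reference, this is roughly how the counting is wired up; the spi_nor_* names below stand in for my real SPI NOR driver functions, and the counter names are just for this sketch:

```c
#include "lfs.h"

// Placeholders for the real SPI NOR driver hooks.
extern int spi_nor_read(const struct lfs_config *c, lfs_block_t block,
        lfs_off_t off, void *buffer, lfs_size_t size);
extern int spi_nor_prog(const struct lfs_config *c, lfs_block_t block,
        lfs_off_t off, const void *buffer, lfs_size_t size);
extern int spi_nor_erase(const struct lfs_config *c, lfs_block_t block);

// Call counters reported at the end of the test.
static unsigned long read_calls, prog_calls, erase_calls;

static int counting_read(const struct lfs_config *c, lfs_block_t block,
        lfs_off_t off, void *buffer, lfs_size_t size) {
    read_calls++;
    return spi_nor_read(c, block, off, buffer, size);
}

static int counting_prog(const struct lfs_config *c, lfs_block_t block,
        lfs_off_t off, const void *buffer, lfs_size_t size) {
    prog_calls++;
    return spi_nor_prog(c, block, off, buffer, size);
}

static int counting_erase(const struct lfs_config *c, lfs_block_t block) {
    erase_calls++;
    return spi_nor_erase(c, block);
}

// These are plugged into .read, .prog and .erase of the lfs_config above.
```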
The test code does this:
- Format the flash with littlefs
- Do 4 times:
  - 2a. Open a new file in the root directory using LFS_O_CREAT | LFS_O_RDWR | LFS_O_TRUNC
  - 2b. Do 2048 times: write a 64 byte pattern, then call lfs_file_sync()
  - 2c. Write a 40 byte pattern
  - 2d. Rewind and read back the file to verify the contents
  - 2e. Close the file
- Report the number of calls of each of .read(), .prog(), and .erase().
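Roughly in code, assuming the filesystem has already been formatted and mounted into `lfs`; the file names, data patterns, and error handling below are placeholders:

```c
#include <stdio.h>
#include <stdint.h>
#include "lfs.h"

// Sketch of steps 2a-2e for one run of the test.
static void run_test(lfs_t *lfs) {
    uint8_t pattern64[64] = {0xA5}, pattern40[40] = {0x5A}, readback[64];
    lfs_file_t file;

    for (int i = 0; i < 4; i++) {
        char name[16];
        sprintf(name, "test%d", i);

        // 2a. create (or truncate) the file
        lfs_file_open(lfs, &file, name, LFS_O_CREAT | LFS_O_RDWR | LFS_O_TRUNC);

        // 2b. 2048 iterations of: write a 64 byte pattern, then sync
        for (int j = 0; j < 2048; j++) {
            lfs_file_write(lfs, &file, pattern64, sizeof(pattern64));
            lfs_file_sync(lfs, &file);
        }

        // 2c. one trailing 40 byte write
        lfs_file_write(lfs, &file, pattern40, sizeof(pattern40));

        // 2d. rewind and read the file back to verify the contents
        lfs_file_rewind(lfs, &file);
        lfs_ssize_t n;
        while ((n = lfs_file_read(lfs, &file, readback, sizeof(readback))) > 0) {
            // compare readback against the expected contents here
        }

        // 2e. close the file
        lfs_file_close(lfs, &file);
    }
}
```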
I get:

- reads: 307627
- writes: 16482
- erases: 8325
Notes:
- 4 test files are written, each of 131112 bytes
- there are 2048 calls of lfs_file_sync() plus one call of lfs_file_close() for each file, and between any two consecutive of these calls at most 64 bytes of data are written to the file
I guessed that there would be about 2 writes for each iteration in step 2b - one to append the file data, and one to write an updated LFS_TYPE_CTZSTRUCT tag to the directory - plus some extra for the format and maintaining the directory. So the count of writes makes sense to me.
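(That works out to roughly 4 × 2048 × 2 = 16384 writes for step 2b alone, which is close to the 16482 I measured.)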
What I don't understand is why there are so many block erases. With blocks of 4096 bytes (minus 8 for avg. 2 skip-list pointers per block) and writes of 64 data bytes at a time, I guessed there would be around 33 blocks needed per file for the file data, and a swap to the other directory pair block roughly every 10(ish?) times a write happens, for a total of about 4×33 + 4×2049/10 ≈ 951 blocks erased.
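Spelling that estimate out:

- file data blocks per file ≈ 131112 / (4096 - 8) ≈ 33
- lfs_file_sync()/lfs_file_close() calls per file = 2048 + 1 = 2049
- expected erases ≈ 4 × 33 (file data) + 4 × 2049 / 10 (directory swaps) ≈ 132 + 820 ≈ 950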
(I do understand that each new file is kept inline in the dir pair for the first few writes, until it grows big enough to get a skip-list. To make the arithmetic easier I'm just assuming it's always in a skip-list since most of the writes occur after it gets a skip-list.)
Can you please explain why there are so many more block erases than I expect?
Thanks
Hello,
With some more searching, I think I've found some answers:

- #374: https://github.com/littlefs-project/littlefs/issues/374
- #344: https://github.com/littlefs-project/littlefs/issues/344#issuecomment-567195031
As far as I understand it, in the general case a flash part could require writes to be some multiple of a write block size which could be > 1. When a file sync or close occurs, the previous write might have made the file end partway into a write block, so that the new write can't start at the previous file end because you can't write just the unused part of that write block. Littlefs assumes this is always the case, and makes a new copy of the last erase-block-sized block of the file so that the new write can start at the previous end of the file.
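If that's right, it would also roughly account for the count I'm seeing: each of the 4 × 2048 = 8192 writes that follows a sync forces a copy (and so an erase) of the file's current last block, which together with directory maintenance comes out close to the 8325 erases I measured.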
Have I understood that right?
Thanks
Hi @clinton-r, thanks for raising an issue.
Yep, that's right from what I understand. LittleFS can't necessarily continue to incrementally write to the end of the file in the general case, so it must copy the last block of the file to a new block, erasing the new block in the process.
It would be possible to keep writing in place if the end of the file were aligned with the prog_size, which is always true when prog_size=1, but this risks creating a rather confusing situation for devices where this isn't true, where appends are only sometimes efficient.
There may be some other ways to improve this, such as extending the inline file mechanism to buffer the end of files so writes can be expected to always be aligned, but I don't know if the correct answer is clear yet.
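In the meantime, if per-write power-loss granularity isn't strictly needed, an application-side workaround along the same lines is to batch records in RAM and sync less often, so far fewer appends immediately follow a sync. A rough sketch (the buffer size and helper name here are arbitrary):

```c
#include <stdint.h>
#include <string.h>
#include "lfs.h"

// Accumulate small records in RAM and only write+sync when the buffer fills,
// trading sync granularity for far fewer copy-on-append erases.
#define BATCH_SIZE 1024

static uint8_t batch[BATCH_SIZE];
static lfs_size_t batch_used;

static int batched_append(lfs_t *lfs, lfs_file_t *file,
        const void *record, lfs_size_t size) {
    if (batch_used + size > BATCH_SIZE) {
        // flush the buffered records with a single write followed by one sync
        lfs_ssize_t res = lfs_file_write(lfs, file, batch, batch_used);
        if (res < 0) {
            return (int)res;
        }
        int err = lfs_file_sync(lfs, file);
        if (err) {
            return err;
        }
        batch_used = 0;
    }

    memcpy(&batch[batch_used], record, size);
    batch_used += size;
    return 0;
}
```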
Thanks @geky, appreciated.