Severely degraded performance after use
So, I am using a 1M SPIFFS partition for data logging, and I experience degraded performance once the FS becomes dirty. I am attaching a visualization and a file list of the filesystem for reference: spiffs_vis.txt
While it is expected that the filesystem will get into this state after use, there are two problems I see:
- At boot, I run `SPIFFS_gc` repeatedly, specifically to free up space while it's not time-critical. However, after a few iterations GC just stops doing any work and the number of deleted pages stops going down (at which point I stop calling it). Here's the log (note that reads and writes to storage are not logged, only erases):
[Jan 24 12:02:51.027] mgos_vfs_mount /data: size 956561, used: 536387, free: 420174
[Jan 24 12:02:51.034] find_mount_by_path /data -> /data pl 5 -> 2 0x3ffe2924
[Jan 24 12:02:51.156] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xbc000 = 0
[Jan 24 12:02:51.271] esp32_vfs_dev_partit fs_data: erase 4096 @ 0x8b000 = 0
[Jan 24 12:02:51.412] esp32_vfs_dev_partit fs_data: erase 4096 @ 0x65000 = 0
[Jan 24 12:02:52.394] esp32_vfs_dev_partit fs_data: erase 4096 @ 0x85000 = 0
[Jan 24 12:02:52.543] esp32_vfs_dev_partit fs_data: erase 4096 @ 0x74000 = 0
[Jan 24 12:02:52.550] mgos_vfs_fs_spiffs_g GC result -10001 del pages 1659 -> 1657
[Jan 24 12:02:52.688] esp32_vfs_dev_partit fs_data: erase 4096 @ 0x66000 = 0
[Jan 24 12:02:52.837] esp32_vfs_dev_partit fs_data: erase 4096 @ 0x52000 = 0
[Jan 24 12:02:52.967] esp32_vfs_dev_partit fs_data: erase 4096 @ 0x7b000 = 0
[Jan 24 12:02:53.095] esp32_vfs_dev_partit fs_data: erase 4096 @ 0x91000 = 0
[Jan 24 12:02:53.215] esp32_vfs_dev_partit fs_data: erase 4096 @ 0x8000 = 0
... several more ...
[Jan 24 12:03:07.486] esp32_vfs_dev_partit fs_data: erase 4096 @ 0x67000 = 0
[Jan 24 12:03:07.611] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xdc000 = 0
[Jan 24 12:03:07.733] esp32_vfs_dev_partit fs_data: erase 4096 @ 0x8c000 = 0
[Jan 24 12:03:07.870] esp32_vfs_dev_partit fs_data: erase 4096 @ 0x58000 = 0
[Jan 24 12:03:08.003] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xe0000 = 0
[Jan 24 12:03:08.010] mgos_vfs_fs_spiffs_g GC result -10001 del pages 1582 -> 1582
- Every small write seems to require one, or even several, erases. The first few writes are fast:
[Jan 24 12:03:18.302] mgos_vfs_write 513 15 => 0x3ffe2924:1 => 15
[Jan 24 12:03:18.309] mgos_vfs_lseek 513 0 1 => 0x3ffe2924:1 => 9809
[Jan 24 12:03:18.316] mgos_vfs_write 513 15 => 0x3ffe2924:1 => 15
[Jan 24 12:03:18.323] mgos_vfs_lseek 513 0 1 => 0x3ffe2924:1 => 9824
[Jan 24 12:03:18.331] mgos_vfs_write 513 15 => 0x3ffe2924:1 => 15
[Jan 24 12:03:18.338] mgos_vfs_lseek 513 0 1 => 0x3ffe2924:1 => 9839
[Jan 24 12:03:18.346] mgos_vfs_write 513 15 => 0x3ffe2924:1 => 15
[Jan 24 12:03:18.353] mgos_vfs_lseek 513 0 1 => 0x3ffe2924:1 => 9854
but then...
[Jan 24 12:03:18.906] esp32_vfs_dev_partit fs_data: erase 4096 @ 0x6a000 = 0
[Jan 24 12:03:18.915] mgos_vfs_write 513 15 => 0x3ffe2924:1 => 15
[Jan 24 12:03:18.922] mgos_vfs_lseek 513 0 1 => 0x3ffe2924:1 => 10289
[Jan 24 12:03:19.055] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xa1000 = 0
[Jan 24 12:03:19.063] mgos_vfs_write 513 15 => 0x3ffe2924:1 => 15
[Jan 24 12:03:19.070] mgos_vfs_lseek 513 0 1 => 0x3ffe2924:1 => 10304
[Jan 24 12:03:19.189] esp32_vfs_dev_partit fs_data: erase 4096 @ 0x76000 = 0
[Jan 24 12:03:19.197] mgos_vfs_write 513 15 => 0x3ffe2924:1 => 15
[Jan 24 12:03:19.204] mgos_vfs_lseek 513 0 1 => 0x3ffe2924:1 => 10319
[Jan 24 12:03:19.342] esp32_vfs_dev_partit fs_data: erase 4096 @ 0x62000 = 0
[Jan 24 12:03:19.349] mgos_vfs_write 513 15 => 0x3ffe2924:1 => 15
[Jan 24 12:03:19.356] mgos_vfs_lseek 513 0 1 => 0x3ffe2924:1 => 10334
[Jan 24 12:03:19.478] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xc5000 = 0
[Jan 24 12:03:19.485] mgos_vfs_write 513 15 => 0x3ffe2924:1 => 15
[Jan 24 12:03:19.492] mgos_vfs_lseek 513 0 1 => 0x3ffe2924:1 => 10349
and then it gets worse...
[Jan 24 12:03:22.445] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xa8000 = 0
[Jan 24 12:03:22.460] mgos_vfs_write 513 15 => 0x3ffe2924:1 => 15
[Jan 24 12:03:22.466] mgos_vfs_lseek 513 0 1 => 0x3ffe2924:1 => 10619
[Jan 24 12:03:22.594] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xea000 = 0
[Jan 24 12:03:22.721] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xee000 = 0
[Jan 24 12:03:22.857] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xbd000 = 0
[Jan 24 12:03:22.983] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xe9000 = 0
[Jan 24 12:03:23.129] esp32_vfs_dev_partit fs_data: erase 4096 @ 0x38000 = 0
[Jan 24 12:03:23.136] mgos_vfs_write 513 15 => 0x3ffe2924:1 => 15
[Jan 24 12:03:23.143] mgos_vfs_lseek 513 0 1 => 0x3ffe2924:1 => 10634
[Jan 24 12:03:23.277] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xae000 = 0
[Jan 24 12:03:23.404] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xfc000 = 0
[Jan 24 12:03:23.516] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xc6000 = 0
[Jan 24 12:03:23.529] mgos_vfs_write 513 15 => 0x3ffe2924:1 => 15
[Jan 24 12:03:23.536] mgos_vfs_lseek 513 0 1 => 0x3ffe2924:1 => 10649
[Jan 24 12:03:23.658] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xfa000 = 0
[Jan 24 12:03:23.671] mgos_vfs_write 513 15 => 0x3ffe2924:1 => 15
[Jan 24 12:03:23.674] mgos_vfs_lseek 513 0 1 => 0x3ffe2924:1 => 10664
[Jan 24 12:03:23.798] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xb4000 = 0
[Jan 24 12:03:23.931] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xc0000 = 0
[Jan 24 12:03:24.086] esp32_vfs_dev_partit fs_data: erase 4096 @ 0x70000 = 0
[Jan 24 12:03:24.197] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xaa000 = 0
[Jan 24 12:03:24.342] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xa3000 = 0
[Jan 24 12:03:24.349] mgos_vfs_write 513 15 => 0x3ffe2924:1 => 15
[Jan 24 12:03:24.356] mgos_vfs_lseek 513 0 1 => 0x3ffe2924:1 => 10679
[Jan 24 12:03:24.484] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xd6000 = 0
[Jan 24 12:03:24.622] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xde000 = 0
[Jan 24 12:03:24.748] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xeb000 = 0
[Jan 24 12:03:24.871] esp32_vfs_dev_partit fs_data: erase 4096 @ 0xab000 = 0
[Jan 24 12:03:25.005] esp32_vfs_dev_partit fs_data: erase 4096 @ 0x7a000 = 0
[Jan 24 12:03:25.013] mgos_vfs_write 513 15 => 0x3ffe2924:1 => 15
[Jan 24 12:03:25.020] mgos_vfs_lseek 513 0 1 => 0x3ffe2924:1 => 10694
Why would SPIFFS behave like this? I budgeted for and can live with occasional page erases, but why would writing 15 bytes require erasing multiple 4K sectors? Repeatedly? It seems to me that the steady state should be roughly: write, write, write, ... (up to ~4K), erase+write, write, write, write... (up to ~4K).
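For reference, here is the toy page-accounting model I've been using to reason about the GC plateau. The copy-on-write index assumption is mine and is not verified against the SPIFFS sources: I'm assuming that erasing a victim block reclaims its deleted pages, but that every live page moved out of it forces a rewrite of the object-index page referencing it, which retires the old index copy as a freshly deleted page.

```python
PAGES_PER_BLOCK = 16  # 4 KiB physical block / 256 B logical page (assumed config)

def gc_pass_delta(victim_live, victim_deleted, index_rewrites):
    """Net change in (free, deleted) page counts after one GC pass that
    erases a victim block, under a copy-on-write index model:
      - the victim's deleted and free pages are reclaimed,
      - each live page is copied to a free page elsewhere,
      - each index rewrite consumes one free page and deletes the old copy.
    """
    assert victim_live + victim_deleted <= PAGES_PER_BLOCK
    assert index_rewrites <= victim_live
    victim_free = PAGES_PER_BLOCK - victim_live - victim_deleted
    delta_free = victim_deleted + victim_free - index_rewrites
    delta_deleted = index_rewrites - victim_deleted
    return delta_free, delta_deleted

# A mostly-dead victim makes real progress:
print(gc_pass_delta(victim_live=4, victim_deleted=12, index_rewrites=4))  # (8, -8)
# A balanced victim (worst case: one index rewrite per moved page) is a
# no-op, which would match the "del pages 1582 -> 1582" plateau above:
print(gc_pass_delta(victim_live=8, victim_deleted=8, index_rewrites=8))   # (0, 0)
```

If this model is roughly right, GC stalls once every remaining block holds at least as many live pages as deleted ones: each pass then creates about as many deleted pages (retired index copies) as it reclaims.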
The file system needs to update metadata as well. IMHO, a file system for embedded NANDs should distinguish between "fixed" and "variable-sized" files. In a typical "write-forward" data-logging process, where you never change what has already been written and the file size is fixed, you could obtain your simple behaviour. But for anything else the overhead is unfortunately necessary.
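To put rough numbers on that overhead (the page geometry and the per-append index rewrite are assumptions about a common SPIFFS configuration, not measurements): with 256-byte logical pages in 4 KiB blocks, 15-byte appends that only ever fill data pages would consume one block per ~273 writes, which is the simple steady state @rojer expected. If instead every append also retires and rewrites an object-index page (the pessimistic copy-on-write case), page consumption is dominated by index churn, and once free pages run out each append has to wait for GC erases:

```python
PAGE_SIZE = 256        # SPIFFS logical page size (assumed)
PAGES_PER_BLOCK = 16   # 4 KiB physical block / 256 B logical page

def pages_consumed(appends, record_size, index_rewrite_per_append):
    """Free logical pages consumed by `appends` sequential appends of
    `record_size` bytes.  In the naive model only data pages fill up; in
    the copy-on-write model each append additionally retires the old
    object-index page and writes a fresh copy (one extra free page)."""
    data_pages = (appends * record_size + PAGE_SIZE - 1) // PAGE_SIZE
    index_pages = appends if index_rewrite_per_append else 0
    return data_pages + index_pages

# Naive model: ~273 fifteen-byte appends fill one 16-page block,
# i.e. roughly one 4 KiB erase per 273 writes:
print(pages_consumed(273, 15, False))  # 16
# Copy-on-write model: the same appends churn through ~18 blocks,
# i.e. an erase every ~15 writes, close to what the log shows:
print(pages_consumed(273, 15, True))   # 289
```

Whether real SPIFFS rewrites the index on every small append depends on the write pattern and configuration (it can pack bytes into the tail of the last data page), so treat the second figure as an upper bound on the churn.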
@rojer it would be nice if you could attach your (or some) source code that reproduces the problem simply by running it; that way the author could find the bug more easily. Good work uncovering this anyway. 👍