
pkg: add FlashDB

benpicco opened this pull request 3 years ago · 5 comments

Contribution description

This adds a package for FlashDB, an embedded key-value and time-series database.

It can either operate directly on a raw MTD device (flashdb_mtd) or use the VFS layer for file-based operation.

Testing procedure

I added a test that uses the examples provided by FlashDB.

tests/pkg_flashdb
2022-02-05 00:21:09,418 # main(): This is RIOT! (Version: 2022.04-devel-117-gda35d-pkg/flashdb)
2022-02-05 00:21:09,591 # ==================== kvdb_basic_sample ====================
2022-02-05 00:21:09,601 # get the 'boot_count' value is 1
2022-02-05 00:21:11,210 # set the 'boot_count' value to 2
2022-02-05 00:21:11,215 # ===========================================================
2022-02-05 00:21:11,220 # ================= kvdb_type_string_sample =================
2022-02-05 00:21:12,636 # create the 'temp' string KV, value is: 36C
2022-02-05 00:21:12,642 # get the 'temp' value is: 36C
2022-02-05 00:21:14,057 # set 'temp' value to 38C
2022-02-05 00:21:14,340 # delete the 'temp' finish
2022-02-05 00:21:14,346 # ===========================================================
2022-02-05 00:21:14,351 # ================== kvdb_type_blob_sample ==================
2022-02-05 00:21:15,241 # create the 'temp' blob KV, value is: 36
2022-02-05 00:21:15,272 # get the 'temp' value is: 36
2022-02-05 00:21:16,689 # set 'temp' value to 38
2022-02-05 00:21:16,972 # delete the 'temp' finish
2022-02-05 00:21:16,977 # ===========================================================
2022-02-05 00:21:17,013 # ======================= tsdb_sample =======================
2022-02-05 00:21:17,017 # append the new status.temp (36) and status.humi (85)
2022-02-05 00:21:17,022 # append the new status.temp (38) and status.humi (90)
2022-02-05 00:21:17,028 # [query_cb] queried a TSL: time: 19, temp: 36, humi: 85
2022-02-05 00:21:17,034 # [query_cb] queried a TSL: time: 21, temp: 38, humi: 90
2022-02-05 00:21:17,060 # [query_by_time_cb] queried a TSL: time: 19, temp: 36, humi: 85
2022-02-05 00:21:17,066 # [query_by_time_cb] queried a TSL: time: 21, temp: 38, humi: 90
2022-02-05 00:21:17,084 # query count is: 0
2022-02-05 00:21:17,093 # set the TSL (time 19) status from 3 to 3
2022-02-05 00:21:17,374 # set the TSL (time 21) status from 3 to 3
2022-02-05 00:21:17,673 # ===========================================================
tests/pkg_flashdb_mtd
2022-02-05 00:12:48,097 # main(): This is RIOT! (Version: 2022.04-devel-118-g80d9a-pkg/flashdb)
2022-02-05 00:12:48,104 # [I/FAL] Flash Abstraction Layer (V0.5.0) initialize success.
2022-02-05 00:12:48,117 # ==================== kvdb_basic_sample ====================
2022-02-05 00:12:48,120 # get the 'boot_count' value is 12
2022-02-05 00:12:48,132 # set the 'boot_count' value to 13
2022-02-05 00:12:48,137 # ===========================================================
2022-02-05 00:12:48,143 # ================= kvdb_type_string_sample =================
2022-02-05 00:12:48,152 # create the 'temp' string KV, value is: 36C
2022-02-05 00:12:48,155 # get the 'temp' value is: 36C
2022-02-05 00:12:48,157 # set 'temp' value to 38C
2022-02-05 00:12:48,160 # delete the 'temp' finish
2022-02-05 00:12:48,165 # ===========================================================
2022-02-05 00:12:48,170 # ================== kvdb_type_blob_sample ==================
2022-02-05 00:12:48,180 # create the 'temp' blob KV, value is: 36
2022-02-05 00:12:48,182 # get the 'temp' value is: 36
2022-02-05 00:12:48,185 # set 'temp' value to 38
2022-02-05 00:12:48,187 # delete the 'temp' finish
2022-02-05 00:12:48,192 # ===========================================================
2022-02-05 00:12:48,198 # ======================= tsdb_sample =======================
2022-02-05 00:12:48,203 # append the new status.temp (36) and status.humi (85)
2022-02-05 00:12:48,207 # append the new status.temp (38) and status.humi (90)
2022-02-05 00:12:48,212 # [query_cb] queried a TSL: time: 1, temp: 36, humi: 85
2022-02-05 00:12:48,217 # [query_cb] queried a TSL: time: 1, temp: 38, humi: 90
2022-02-05 00:12:48,222 # [query_by_time_cb] queried a TSL: time: 1, temp: 36, humi: 85
2022-02-05 00:12:48,228 # [query_by_time_cb] queried a TSL: time: 1, temp: 38, humi: 90
2022-02-05 00:12:48,229 # query count is: 0
2022-02-05 00:12:48,233 # set the TSL (time 1) status from 3 to 3
2022-02-05 00:12:48,236 # set the TSL (time 1) status from 3 to 3
2022-02-05 00:12:48,242 # ===========================================================

Issues/PRs references

includes #17341 for convenience

benpicco avatar Feb 03 '22 17:02 benpicco

I like this one :) The CI has some comments.

maribu avatar Feb 03 '22 18:02 maribu

One question out of curiosity: the GitHub page says it supports wear balancing, do you know how that is done? I was trying to find some information to estimate flash wear-out when using the API, but didn't find a section on that immediately.

fjmolinas avatar Feb 08 '22 08:02 fjmolinas

I think the Python check needs a rebase to be fixed?

fjmolinas avatar Mar 25 '22 15:03 fjmolinas

I added a flashdb_vfs module to make the naming more consistent. Now flashdb_mtd selects the MTD backend and flashdb_vfs the file backend.

Also no point in keeping the MTD commit separate.
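
For illustration, using the package from an application could then look roughly like this (a hypothetical minimal Makefile fragment; only the flashdb package name and the two module names are taken from this PR):

```makefile
# Hypothetical application Makefile fragment: pull in the FlashDB package
USEPKG += flashdb

# Select exactly one backend: raw MTD access ...
USEMODULE += flashdb_mtd
# ... or file-based operation via the VFS layer:
#USEMODULE += flashdb_vfs
```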

benpicco avatar Mar 26 '22 19:03 benpicco

Ping?

benpicco avatar Sep 14 '22 14:09 benpicco

Code / integration looks fine to me. Testing delegated to the included test applications. Didn't look at the Kconfig. @chrysn please come take another look!

kaspar030 avatar Dec 13 '22 09:12 kaspar030

Also is it correct that flashdb_vfs and flashdb_mtd are mutually exclusive?

MrKevinWeiss avatar Dec 21 '22 14:12 MrKevinWeiss

will it always be the user who wants to select which backend

Yes. Even if you have a file system on e.g. the SD card, you might still want to use FlashDB on the internal flash instead - or not.

Also is it correct that flashdb_vfs and flashdb_mtd are mutually exclusive?

Yes

benpicco avatar Dec 21 '22 16:12 benpicco

@kaspar030 or @chrysn since you started the review, do you have time for this? Otherwise I could take over.

maribu avatar Jan 10 '23 09:01 maribu

do you have time for this?

I guess that's a no

benpicco avatar Feb 08 '23 12:02 benpicco

I'm very sorry: my time is terribly hard to predict right now, and I'm subscribed to too many issues (maybe I can have posts that ping me highlighted among the others?). I'd appreciate it if someone else could continue reviewing this.

chrysn avatar Feb 08 '23 12:02 chrysn

I think there should be some kind of warning that this will not do wear leveling and advising a solution like littlefs2.

Whaat? I thought that was one of its selling points.

benpicco avatar Feb 23 '23 21:02 benpicco

I think there should be some kind of warning that this will not do wear leveling and advising a solution like littlefs2.

Whaat? I thought that was one of its selling points.

Maybe I missed something, or the wear leveling is done differently than in littlefs2. The memory file for the MTD backend on native has one instance of the database after multiple runs. The littlefs2 memory file (VFS version) has the database multiple times (each time written to another place). Maybe this does not level wear across reboots?

kfessel avatar Feb 23 '23 21:02 kfessel

Seems like the compactness of the written data threw me off (littlefs2 uses more space when writing to different places) -> the MTD backend does write to different places (it levels the wear).

kfessel avatar Feb 23 '23 23:02 kfessel

Murdock results

:heavy_check_mark: PASSED

674e3adb65dfbede6b2bbc0a2d3aa16df3cc1b95 tests/pkg_flashdb: add test for FlashDB

Success  Failures  Total  Runtime
6887     0         6888   09m:31s

Artifacts

riot-ci avatar Feb 24 '23 00:02 riot-ci

I had another look at the writes (native MTD debug) -> only writes are happening, no erases (I assume until rollover).

There is a huge number of small writes (sometimes to the same location, but no erase). The test has an increasing number of writes with every run, as the database grows by some TSDB entries each run. I think this is OK.

grep write 
mtd_native: write from page 4, offset 0xa3 count 1
mtd_native: write from page 5, offset 0x47 count 1
mtd_native: write from page 5, offset 0x4b count 20
mtd_native: write from page 5, offset 0x5f count 10
mtd_native: write from page 5, offset 0x69 count 4
mtd_native: write from page 5, offset 0x47 count 1
mtd_native: write from page 4, offset 0xa3 count 1
mtd_native: write from page 5, offset 0x6d count 1
mtd_native: write from page 5, offset 0x71 count 20
mtd_native: write from page 5, offset 0x85 count 4
mtd_native: write from page 5, offset 0x89 count 3
mtd_native: write from page 5, offset 0x6d count 1
mtd_native: write from page 5, offset 0x6d count 1
mtd_native: write from page 5, offset 0x8c count 1
mtd_native: write from page 5, offset 0x90 count 20
mtd_native: write from page 5, offset 0xa4 count 4
mtd_native: write from page 5, offset 0xa8 count 3
mtd_native: write from page 5, offset 0x8c count 1
mtd_native: write from page 5, offset 0x6d count 1
mtd_native: write from page 5, offset 0x8c count 1
mtd_native: write from page 5, offset 0xab count 1
mtd_native: write from page 5, offset 0xaf count 20
mtd_native: write from page 5, offset 0xc3 count 4
mtd_native: write from page 5, offset 0xc7 count 4
mtd_native: write from page 5, offset 0xab count 1
mtd_native: write from page 5, offset 0xab count 1
mtd_native: write from page 5, offset 0xcb count 1
mtd_native: write from page 5, offset 0xcf count 20
mtd_native: write from page 5, offset 0xe3 count 4
mtd_native: write from page 5, offset 0xe7 count 4
mtd_native: write from page 5, offset 0xcb count 1
mtd_native: write from page 5, offset 0xab count 1
mtd_native: write from page 5, offset 0xcb count 1
mtd_native: write from page 21, offset 0x8 count 1
mtd_native: write from page 21, offset 0xc count 12
mtd_native: write from page 2f, offset 0x88 count 8
mtd_native: write from page 21, offset 0x8 count 1
mtd_native: write from page 21, offset 0x18 count 1
mtd_native: write from page 21, offset 0x1c count 12
mtd_native: write from page 2f, offset 0x80 count 8
mtd_native: write from page 21, offset 0x18 count 1
mtd_native: write from page 20, offset 0x28 count 1
mtd_native: write from page 20, offset 0x38 count 1
mtd_native: write from page 20, offset 0x48 count 1
mtd_native: write from page 20, offset 0x58 count 1
mtd_native: write from page 20, offset 0x68 count 1
mtd_native: write from page 20, offset 0x78 count 1
mtd_native: write from page 20, offset 0x88 count 1
mtd_native: write from page 20, offset 0x98 count 1
mtd_native: write from page 20, offset 0xa8 count 1
mtd_native: write from page 20, offset 0xb8 count 1
mtd_native: write from page 20, offset 0xc8 count 1
mtd_native: write from page 20, offset 0xd8 count 1
mtd_native: write from page 20, offset 0xe8 count 1
mtd_native: write from page 20, offset 0xf8 count 1
mtd_native: write from page 21, offset 0x8 count 1
mtd_native: write from page 21, offset 0x18 count 1
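
The distribution can be checked by tallying the trace per write location; a small self-contained sketch (three log lines from the trace above are embedded inline, in practice you would pipe in the captured mtd_native output):

```shell
# Count writes per (page, offset) pair in mtd_native debug output to see
# how writes are distributed; sample lines taken from the trace above.
printf '%s\n' \
  'mtd_native: write from page 5, offset 0x6d count 1' \
  'mtd_native: write from page 5, offset 0x6d count 1' \
  'mtd_native: write from page 5, offset 0x8c count 1' |
awk '/write/ {n[$5 " " $7]++} END {for (k in n) print n[k], "writes to page/offset", k}' |
sort -rn
```

With the full log this makes hot spots (the same page/offset written repeatedly without an erase in between) stand out immediately.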

kfessel avatar Feb 24 '23 10:02 kfessel

Thank you for the review!

bors merge

benpicco avatar Feb 27 '23 20:02 benpicco

Build failed (retrying...):

bors[bot] avatar Feb 27 '23 21:02 bors[bot]

Canceled.

bors[bot] avatar Feb 27 '23 22:02 bors[bot]

bors merge

benpicco avatar Feb 28 '23 00:02 benpicco

:clock1: Waiting for PR status (GitHub check) to be set, probably by CI. Bors will automatically try to run when all required PR statuses are set.

bors[bot] avatar Feb 28 '23 00:02 bors[bot]

bors merge

benpicco avatar Feb 28 '23 00:02 benpicco

Build succeeded:

bors[bot] avatar Feb 28 '23 08:02 bors[bot]