
Fix closing `File`s and `Directories`

Open jbeaurivage opened this issue 2 years ago • 4 comments

This PR attempts to fix closing Files and Directories.

The working principle is that each new File and Directory is assigned a unique SearchId. To close/delete them, instead of looking for magic Cluster numbers, we simply need to look up the ID in the list of open files or directories. This mitigates the issues where we might forget to update the cluster number, or where multiple entries might share the same cluster number (most notably, 0).

Also replaces raw arrays with `heapless::Vec`s for a cleaner implementation.

Optionally, this could remove the need for a &Volume parameter in the close_dir and close_file methods.
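The approach described above can be sketched roughly as follows. This is an illustrative sketch only, not the crate's actual API: the type and method names (`IdGenerator`, `OpenFiles`, `close`) are hypothetical, and a std `Vec` stands in for the `heapless::Vec` the PR uses so the example is self-contained.

```rust
/// A unique, monotonically increasing handle identifier (hypothetical sketch).
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct SearchId(u32);

/// Hands out SearchIds; just wraps a counter.
struct IdGenerator {
    next: u32,
}

impl IdGenerator {
    fn new() -> Self {
        IdGenerator { next: 0 }
    }

    fn generate(&mut self) -> SearchId {
        let id = SearchId(self.next);
        self.next += 1;
        id
    }
}

/// An entry in the open-files list: the handle ID plus per-file state
/// (starting cluster shown for flavour).
struct OpenFile {
    id: SearchId,
    starting_cluster: u32,
}

/// The open-files list. The real PR uses `heapless::Vec` (fixed capacity,
/// no allocator); std `Vec` is used here so the sketch compiles standalone.
struct OpenFiles {
    files: Vec<OpenFile>,
}

impl OpenFiles {
    /// Closing a file is a lookup by unique ID, not by cluster number,
    /// so duplicate or zero cluster numbers can no longer cause a mix-up.
    fn close(&mut self, id: SearchId) -> Result<(), &'static str> {
        match self.files.iter().position(|f| f.id == id) {
            Some(idx) => {
                self.files.remove(idx);
                Ok(())
            }
            None => Err("file not open"),
        }
    }
}

fn main() {
    let mut gen = IdGenerator::new();
    let a = gen.generate();
    let b = gen.generate();
    // Two brand-new files can both start at cluster 0 without ambiguity,
    // because they are distinguished by their SearchIds.
    let mut open = OpenFiles {
        files: vec![
            OpenFile { id: a, starting_cluster: 0 },
            OpenFile { id: b, starting_cluster: 0 },
        ],
    };
    assert!(open.close(a).is_ok());
    assert!(open.close(a).is_err()); // already closed
    assert_eq!(open.files.len(), 1);
}
```

Note how the two files with identical starting clusters are unambiguous here, which is exactly the case cluster-based lookup cannot handle.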

@ostenning and @thejpster, I would very much like to know what you think of this. I'm not super familiar with the internals of this library so I'd definitely like a good review.

Hopefully fixes #66.

jbeaurivage avatar Oct 17 '22 22:10 jbeaurivage

As far as I recall, I picked cluster numbers because every directory and every file must be located within a unique cluster number, and there are no duplicates across any other file or directory on that Volume.

thejpster avatar Oct 18 '22 18:10 thejpster

Hmm. As noted in the issue, I can see a problem where the volume is corrupt and multiple directory entries point to a file with the same starting cluster.

thejpster avatar Oct 18 '22 18:10 thejpster

> As far as I recall, I picked cluster numbers because every directory and every file must be located within a unique cluster number, and there are no duplicates across any other file or directory on that Volume.

In that case, it seems like sticking with Vec makes sense, but keeping the ID generator shouldn't be necessary. However, as mentioned in #66, files are initially created with a cluster number of 0, which makes them unsearchable within the Vec until they have a "real" cluster number assigned. I added code to update the cluster number in the open-files list when one gets assigned, but that doesn't fully fix the problem, since there's a transient state where the file is still assigned to cluster 0.

Unfortunately this is where my knowledge of FAT is too limited to move forward; can't files be assigned a real cluster number as soon as they're created, and not when first written to?

jbeaurivage avatar Oct 18 '22 21:10 jbeaurivage

> Unfortunately this is where my knowledge of FAT is too limited to move forward; can't files be assigned a real cluster number as soon as they're created, and not when first written to?

Yes, it's probably fine to allocate the first cluster when you create a new file.

thejpster avatar Oct 25 '22 11:10 thejpster
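The idea of allocating at creation time can be sketched against a toy FAT. This is illustrative only (not embedded-sdmmc code): `allocate_cluster` and the in-memory `fat` slice are hypothetical, standing in for the crate's real FAT access.

```rust
// Toy FAT entry values.
const FREE: u32 = 0;
const END_OF_CHAIN: u32 = 0x0FFF_FFFF;

/// Find the first free FAT entry, mark it end-of-chain, and return its index.
/// Cluster numbers 0 and 1 are reserved in FAT, so scanning starts at 2.
fn allocate_cluster(fat: &mut [u32]) -> Option<u32> {
    for (i, entry) in fat.iter_mut().enumerate().skip(2) {
        if *entry == FREE {
            *entry = END_OF_CHAIN;
            return Some(i as u32);
        }
    }
    None
}

fn main() {
    // Entries 0 and 1 are reserved; 2 and 3 are free.
    let mut fat = vec![END_OF_CHAIN, END_OF_CHAIN, FREE, FREE];
    // Creating a file immediately allocates its first cluster...
    let first = allocate_cluster(&mut fat).expect("no free clusters");
    assert_eq!(first, 2);
    // ...so the file is identifiable by a real, non-zero cluster right away,
    // and the transient cluster-0 state never exists.
    assert_ne!(first, 0);
}
```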

@jbeaurivage is there any progress on this PR? I've found some other issues related to opening/closing files, which you can check here: #75

elpiel avatar Feb 24 '23 08:02 elpiel

@elpiel unfortunately I haven't had the time to jump back into that issue. Thanks for reporting #75, I'll try to look at that when I finally manage to work on this.

jbeaurivage avatar Feb 24 '23 18:02 jbeaurivage

All that is left to do at this point is to do away with the IdGenerator and assign a unique cluster to every file and directory as soon as it is created, instead of going through the zero-cluster first.

jbeaurivage avatar Jun 11 '23 02:06 jbeaurivage

Looking good! If we don't have any tests for opening multiple files, or opening the same file twice (the second open should fail), we should add some.
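Such a test might look like the following sketch. The names (`VolumeManager`, `open_file`, `Error::FileAlreadyOpen`) are hypothetical stand-ins, since the crate's real API surface isn't shown in this thread; the point is only the shape of the check.

```rust
#[derive(Debug, PartialEq)]
enum Error {
    FileAlreadyOpen,
}

struct VolumeManager {
    // Identities (starting clusters, here) of currently open files.
    open_files: Vec<u32>,
}

impl VolumeManager {
    /// Refuse to open a file whose identity is already in the open list.
    fn open_file(&mut self, cluster: u32) -> Result<(), Error> {
        if self.open_files.contains(&cluster) {
            return Err(Error::FileAlreadyOpen);
        }
        self.open_files.push(cluster);
        Ok(())
    }
}

fn main() {
    let mut vm = VolumeManager { open_files: Vec::new() };
    // First open succeeds; the second open of the same file must fail.
    assert!(vm.open_file(42).is_ok());
    assert_eq!(vm.open_file(42), Err(Error::FileAlreadyOpen));
}
```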

thejpster avatar Jun 11 '23 09:06 thejpster

If we can resolve the two nits, I'd be inclined to merge this and maybe look at making IdGenerator go away later.

thejpster avatar Jun 11 '23 09:06 thejpster

Sounds good! All fixed up. By the way, there is a test that attempts to open the same file twice. Up until the last commit that test was failing (file could be opened twice without errors), but it passes now, so we're covered there :)

jbeaurivage avatar Jun 11 '23 11:06 jbeaurivage