Writing / Reading Bug involving writer `chunk_bytes` information
🐛 Bug
When writing chunks, `chunk_bytes` is calculated via https://github.com/Lightning-AI/litdata/blob/b9aa903bd9c98cd96ee989394fdaa1a38f8036f0/src/litdata/streaming/writer.py#L237, but the actual file size is larger, since the file also contains additional (potentially large) metadata at the beginning.
When reading chunks, there is a separate thread that downloads the chunks from the cloud, and a while loop that spins until the file size is larger than `chunk_bytes`, see https://github.com/Lightning-AI/litdata/blob/b9aa903bd9c98cd96ee989394fdaa1a38f8036f0/src/litdata/streaming/item_loader.py#L146.
This means there are edge cases where the file is still being downloaded but its size already exceeds `chunk_bytes` (since the complete file is larger than that value). The reader then thinks the file is ready and indexes into an offset that doesn't exist yet, leading to downstream errors.
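To make the race concrete, here is a minimal, self-contained sketch (not litdata's actual writer/reader code; the file name, header size, and payload are made up). One thread writes a header followed by the payload, mimicking an in-flight download, while the main thread runs the same kind of size-based spin loop as the item loader. Because `chunk_bytes` only counts the payload, the check can pass before the file is complete:

```python
import os
import threading
import time

CHUNK_FILE = "chunk-0.bin"          # made-up file name
HEADER = b"\x00" * 4096             # stands in for the offsets/metadata written first
PAYLOAD = b"\x01" * (1 << 20)       # the actual items; only this is counted in chunk_bytes
CHUNK_BYTES = len(PAYLOAD)          # what the writer records (payload only)

def slow_download():
    # Write the header, then the payload in small pieces to mimic a download in progress.
    with open(CHUNK_FILE, "wb") as f:
        f.write(HEADER)
        f.flush()
        for i in range(0, len(PAYLOAD), 64 * 1024):
            f.write(PAYLOAD[i:i + 64 * 1024])
            f.flush()
            time.sleep(0.001)

threading.Thread(target=slow_download).start()

# Reader side: the same kind of size-based spin loop as in item_loader.py#L146.
while not os.path.exists(CHUNK_FILE) or os.path.getsize(CHUNK_FILE) < CHUNK_BYTES:
    time.sleep(0.001)

# The check passed, but the header isn't part of CHUNK_BYTES, so the tail of the
# payload (where the last items live) may not be on disk yet.
print("size at release:", os.path.getsize(CHUNK_FILE), "expected final:", len(HEADER) + CHUNK_BYTES)
```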
To Reproduce
Since this is non-deterministic and involves large data, I don't have code, but I can outline my scenario: create large chunks (I'm using the default of 64 MB), then index the last data point of each chunk (I have > 100 chunks); you'll most likely hit this issue.
My guess is that even with larger chunks holding a lot of data, as long as the offset information stored in the chunk is sufficiently large (since it doesn't get accounted for in the `chunk_bytes` info) and you index the last element, you'll probably see it too.
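For what it's worth, the shape of my setup looks roughly like the sketch below (the payload, paths, and the index.json handling are placeholders/assumptions, and the exact litdata keyword arguments may differ from what I have locally):

```python
import json
import os

from litdata import StreamingDataset, optimize


def make_item(i):
    # ~1 MB per sample so the default ~64 MB chunks fill up and many chunks get written.
    return {"index": i, "data": os.urandom(1_000_000)}


if __name__ == "__main__":
    output_dir = "s3://my-bucket/optimized"  # placeholder remote location

    optimize(
        fn=make_item,
        inputs=list(range(10_000)),
        output_dir=output_dir,
        chunk_bytes="64MB",
        num_workers=4,
    )

    # Stream back and touch the last item of every chunk, where the unaccounted
    # header bytes in chunk_bytes are most likely to bite.
    ds = StreamingDataset(output_dir)

    # Assumed index.json layout: a "chunks" list whose entries carry the per-chunk
    # item count under "chunk_size" (fetch the file from output_dir however you like).
    with open("index.json") as f:
        chunks = json.load(f)["chunks"]

    cursor = 0
    for chunk in chunks:
        cursor += chunk["chunk_size"]
        _ = ds[cursor - 1]  # last element of this chunk
```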
Expected behavior
This should work. I am happy to make a PR but unsure which direction to pursue. Several ideas:
- In the writer logic, set `chunk_bytes` to be the actual file size rather than just the size of the data points. This is obviously the easiest, but I'm not sure whether this info is used somewhere else.
- Rewrite the reader logic so that it waits until the file size stops changing (rough sketch below). A bit nastier.
- Use the FileLocks you already have for downloading and wait for them to be released, or something along those lines? I haven't used FileLocks before, so I can't comment more on this.
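For the second idea, something along these lines is what I had in mind; a minimal sketch, not wired into litdata, with an arbitrary poll interval:

```python
import os
import time


def wait_until_file_is_stable(path: str, min_bytes: int, poll_interval: float = 0.005) -> None:
    """Block until `path` exists, is at least `min_bytes` long, and its size has
    stopped growing between two consecutive polls."""
    last_size = -1
    while True:
        if os.path.exists(path):
            size = os.path.getsize(path)
            if size >= min_bytes and size == last_size:
                return
            last_size = size
        time.sleep(poll_interval)
```

The downside is that a stalled download also looks "stable", which is partly why recording the real file size in the writer (the first idea) seems cleaner to me.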