[WIP]: Fix resume issues with combined streaming dataset in dataloader
Before submitting
- [x] Was this discussed/agreed via a Github issue? (no need for typos and docs improvements)
- [x] Did you read the contributor guideline, Pull Request section?
- [x] Did you make sure to update the docs?
- [x] Did you write any new necessary tests?
How does this PR impact the user?
Currently, users run into errors when attempting to resume a combined streaming dataset with the streaming dataloader, because saving and restoring checkpoints doesn’t work as expected. This PR addresses the root cause of the error, enabling the dataloader to resume from checkpoints successfully and ensuring smoother, more reliable training workflows.
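For context, this is roughly the workflow the fix is meant to support — a minimal sketch assuming litdata's public `StreamingDataset`, `CombinedStreamingDataset`, and `StreamingDataLoader` API, with placeholder dataset paths and hyperparameters:

```python
import torch
from litdata import CombinedStreamingDataset, StreamingDataLoader, StreamingDataset

# Two pre-optimized datasets combined without weights (paths are placeholders).
combined = CombinedStreamingDataset(
    datasets=[
        StreamingDataset("s3://my-bucket/dataset-a"),
        StreamingDataset("s3://my-bucket/dataset-b"),
    ]
)
dataloader = StreamingDataLoader(combined, batch_size=32, num_workers=2)

# Train for a while, then checkpoint the dataloader state alongside the model.
for step, batch in enumerate(dataloader):
    if step == 100:
        torch.save(dataloader.state_dict(), "dataloader.ckpt")
        break

# Later: rebuild the dataloader and resume from the saved state.
dataloader.load_state_dict(torch.load("dataloader.ckpt"))
for batch in dataloader:  # should continue from where the checkpoint left off
    ...
```

Before this PR, the `load_state_dict` step is where resuming a combined dataset broke.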
What does this PR do?
Fixes #331.
- [x] Fixed IndexError when loading the dataloader state before any iteration (see the sketch after this list).
- [ ] Enabled resuming dataloader states for combined datasets (non-weighted) [in progress].
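As a reference for the first item, this is the pattern that used to raise the IndexError — restoring a saved state before the loader has been iterated at all (a sketch; the dataset path and checkpoint name are placeholders):

```python
import torch
from litdata import StreamingDataLoader, StreamingDataset

dataset = StreamingDataset("s3://my-bucket/dataset-a")  # placeholder path
dataloader = StreamingDataLoader(dataset, batch_size=32)

# Calling load_state_dict before the first iteration used to raise an
# IndexError; with this fix it simply primes the loader to resume later.
dataloader.load_state_dict(torch.load("dataloader.ckpt"))

for batch in dataloader:
    ...
```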
PR review
Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in GitHub issues there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 78%. Comparing base (92df8af) to head (242a13c).
Additional details and impacted files
```diff
@@           Coverage Diff           @@
##            main     #362    +/-  ##
=======================================
  Coverage      78%      78%
=======================================
  Files          34       34
  Lines        5016     5020      +4
=======================================
+ Hits         3929     3934      +5
+ Misses       1087     1086      -1
```
Combined dataset (no weights): resuming after a fully completed last epoch now works, but resuming from a partially completed last epoch still fails (looking into it further).
Hi @bhimrazy, what's the current update?
Hi @deependujha,
I'm still running into an IndexError when loading states from the last partial epoch. It usually only happens when the requested number of samples exceeds the number of samples actually available.
```
E IndexError: Caught IndexError in DataLoader worker process 0.
E Original Traceback (most recent call last):
E   File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 253, in _worker_loop
E     fetcher = _DatasetKind.create_fetcher(dataset_kind, dataset, auto_collation, collate_fn, drop_last)
E   File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 80, in create_fetcher
E     return _utils.fetch._IterableDatasetFetcher(dataset, auto_collation, collate_fn, drop_last)
E   File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 22, in __init__
E     self.dataset_iter = iter(dataset)
E   File "/home/runner/work/litdata/litdata/src/litdata/streaming/combined.py", line 160, in __iter__
E     self._iterator = _CombinedDatasetIterator(
E   File "/home/runner/work/litdata/litdata/src/litdata/streaming/combined.py", line 208, in __init__
E     self._dataset_iters = [iter(dataset) for dataset in datasets]
E   File "/home/runner/work/litdata/litdata/src/litdata/streaming/combined.py", line 208, in <listcomp>
E     self._dataset_iters = [iter(dataset) for dataset in datasets]
E   File "/home/runner/work/litdata/litdata/src/litdata/streaming/dataset.py", line 240, in __iter__
E     self._resume(workers_chunks, workers_intervals)
E   File "/home/runner/work/litdata/litdata/src/litdata/streaming/dataset.py", line 312, in _resume
E     interval = self.worker_intervals[self.chunk_index]
E IndexError: list index out of range
```
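The failing line indexes the restored chunk position into the per-worker interval list. A stripped-down illustration of the failure mode (not litdata's actual code):

```python
# The restored state carries a chunk index, but the worker loading it may be
# assigned fewer chunk intervals than the worker that produced the state
# (e.g. after the worker/chunk layout changes between runs).
worker_intervals = [(0, 100), (100, 200)]  # intervals assigned to this worker
restored_chunk_index = 3                   # index saved under a different layout

interval = worker_intervals[restored_chunk_index]  # IndexError: list index out of range
```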
Initially, I encountered a separate error where the number of samples exceeded the actual count in the state-dict test. It looked like the states were accumulating incorrectly between tests, so I split them into separate tests, after which the states were correct as expected.
I haven't had much time lately, but I plan to continue working on this from this weekend.
⚠️ GitGuardian has uncovered 1 secret following the scan of your pull request.
Please consider investigating the findings and remediating the incidents. Failure to do so may lead to compromising the associated services or software components.
Since your pull request originates from a forked repository, GitGuardian is not able to associate the secrets uncovered with secret incidents on your GitGuardian dashboard. Skipping this check run and merging your pull request will create secret incidents on your GitGuardian dashboard.
🔎 Detected hardcoded secret in your pull request
| GitGuardian id | GitGuardian status | Secret | Commit | Filename |
| --- | --- | --- | --- | --- |
| 5685611 | Triggered | Generic High Entropy Secret | 398a654990bde84c0cb9b25e5544680ec4a2e846 | tests/streaming/test_resolver.py |
🛠 Guidelines to remediate hardcoded secrets
- Understand the implications of revoking this secret by investigating where it is used in your code.
- Replace and store your secret safely.
- Revoke and rotate this secret.
- If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.
To avoid such incidents in the future, consider
- following these best practices for managing and storing secrets, including API keys and other credentials
- installing secret detection on pre-commit to catch secrets before they leave your machine and ease remediation.
🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.
Getting close to it:
The test case seems to fail with an IndexError when the number of workers is greater than 2 and the iteration is stopped close to the midpoint of the dataloader length.
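Roughly the repro being described, as a sketch with placeholder dataset paths and a hard-coded stopping point (`state_dict`/`load_state_dict` on `StreamingDataLoader` are litdata's documented resume API):

```python
from litdata import CombinedStreamingDataset, StreamingDataLoader, StreamingDataset

def make_loader():
    combined = CombinedStreamingDataset(
        datasets=[
            StreamingDataset("data/dataset-a"),  # placeholder local datasets
            StreamingDataset("data/dataset-b"),
        ]
    )
    return StreamingDataLoader(combined, batch_size=4, num_workers=4)  # > 2 workers

loader = make_loader()
for step, _ in enumerate(loader):
    if step == 50:  # stop roughly at the midpoint for the sizes used in the test
        state = loader.state_dict()
        break

resumed = make_loader()
resumed.load_state_dict(state)
next(iter(resumed))  # this is where the reported IndexError surfaces
```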
@bhimrazy Is this an active issue? I tried restarting training midway through an epoch last week and was able to continue training when using a CombinedStreamingDataset.
Thank you, @schopra8, for bringing this to my attention. There’s a test case in this PR that fails for the same issue. I’ll review it with the latest updates.
This PR got closed due to an issue with my forked repo.
I will address this issue with a new PR in the near future.
#507