
[WIP]: Fix resume issues with combined streaming dataset in dataloader

Open bhimrazy opened this issue 1 year ago • 6 comments

Before submitting
  • [x] Was this discussed/agreed via a GitHub issue? (no need for typos and docs improvements)
  • [x] Did you read the contributor guideline, Pull Request section?
  • [x] Did you make sure to update the docs?
  • [x] Did you write any new necessary tests?

How does this PR impact the user?

Currently, users hit errors when resuming a combined streaming dataset with the streaming dataloader: saving and restoring checkpoints doesn't work as expected. This PR addresses the root cause, enabling the dataloader to resume from a checkpoint and making training workflows smoother and more reliable.

What does this PR do?

Fixes #331.

  • [x] Fixed IndexError when loading dataloader state before any iteration.
  • [ ] Enabled resuming dataloader states for combined datasets (non-weighted) [ In progress ].
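The save/resume pattern this PR targets can be sketched as follows. This is a hypothetical, minimal stand-in (no litdata dependency): `TinyLoader` and its `state_dict`/`load_state_dict` methods only mimic the shape of the streaming dataloader's checkpoint API, they are not litdata's implementation.

```python
class TinyLoader:
    """Stand-in for a resumable dataloader: yields items and tracks progress."""

    def __init__(self, items):
        self.items = items
        self.num_yielded = 0

    def __iter__(self):
        # Resume from wherever a restored state left off.
        for item in self.items[self.num_yielded:]:
            self.num_yielded += 1
            yield item

    def state_dict(self):
        return {"num_yielded": self.num_yielded}

    def load_state_dict(self, state):
        self.num_yielded = state["num_yielded"]


loader = TinyLoader(list(range(6)))
it = iter(loader)
first_half = [next(it) for _ in range(3)]  # consume part of the "epoch"
ckpt = loader.state_dict()                 # save a mid-epoch checkpoint

restored = TinyLoader(list(range(6)))
restored.load_state_dict(ckpt)  # loading state *before* any iteration must not raise
rest = list(restored)           # resumes at item 3, not item 0
```

The first checklist item corresponds to the `load_state_dict` call above being made before the first iteration; the second covers restoring mid-epoch progress across the combined datasets.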

PR review

Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in GitHub issues there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

bhimrazy avatar Sep 03 '24 19:09 bhimrazy

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 78%. Comparing base (92df8af) to head (242a13c).

Additional details and impacted files
@@         Coverage Diff         @@
##           main   #362   +/-   ##
===================================
  Coverage    78%    78%           
===================================
  Files        34     34           
  Lines      5016   5020    +4     
===================================
+ Hits       3929   3934    +5     
+ Misses     1087   1086    -1     

codecov[bot] avatar Sep 05 '24 04:09 codecov[bot]

Combined Dataset (no weights): resuming from a complete last epoch now works, but resuming from a partial last epoch still fails (looking into it further).

bhimrazy avatar Sep 09 '24 12:09 bhimrazy

hi @bhimrazy What's the current update?

deependujha avatar Sep 19 '24 09:09 deependujha

hi @bhimrazy What's the current update?

Hi @deependujha

I'm still facing an IndexError when loading states from the last partial epoch. It usually happens only when the recorded number of samples exceeds the number of samples actually available.

E       IndexError: Caught IndexError in DataLoader worker process 0.
E       Original Traceback (most recent call last):
E         File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 253, in _worker_loop
E           fetcher = _DatasetKind.create_fetcher(dataset_kind, dataset, auto_collation, collate_fn, drop_last)
E         File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 80, in create_fetcher
E           return _utils.fetch._IterableDatasetFetcher(dataset, auto_collation, collate_fn, drop_last)
E         File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 22, in __init__
E           self.dataset_iter = iter(dataset)
E         File "/home/runner/work/litdata/litdata/src/litdata/streaming/combined.py", line 160, in __iter__
E           self._iterator = _CombinedDatasetIterator(
E         File "/home/runner/work/litdata/litdata/src/litdata/streaming/combined.py", line 208, in __init__
E           self._dataset_iters = [iter(dataset) for dataset in datasets]
E         File "/home/runner/work/litdata/litdata/src/litdata/streaming/combined.py", line 208, in <listcomp>
E           self._dataset_iters = [iter(dataset) for dataset in datasets]
E         File "/home/runner/work/litdata/litdata/src/litdata/streaming/dataset.py", line 240, in __iter__
E           self._resume(workers_chunks, workers_intervals)
E         File "/home/runner/work/litdata/litdata/src/litdata/streaming/dataset.py", line 312, in _resume
E           interval = self.worker_intervals[self.chunk_index]
E       IndexError: list index out of range

Initially, I encountered a separate error where the number of samples exceeded the actual count in the state-dict test. The states seemed to be accumulating incorrectly across tests, so I split the tests apart, after which the states were correct.
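The traceback above boils down to `self.worker_intervals[self.chunk_index]` running past the end of the list: the restored chunk index no longer maps into the intervals the worker actually holds. A minimal illustration of that failure mode (this is an assumed simplification, not litdata's `_resume` code; `resume_interval` is a hypothetical helper):

```python
def resume_interval(worker_intervals, restored_chunk_index):
    """Return the interval to resume from, or None if the saved chunk index
    no longer maps into this worker's interval list."""
    if restored_chunk_index >= len(worker_intervals):
        # This is exactly where the unguarded lookup raises IndexError.
        return None
    return worker_intervals[restored_chunk_index]


worker_intervals = [(0, 100), (100, 200)]  # this worker owns only 2 chunks

ok = resume_interval(worker_intervals, 1)       # (100, 200): valid resume point
stale = resume_interval(worker_intervals, 3)    # None: the unguarded version
                                                # raises "list index out of range"
```

Whether the real fix should clamp, remap, or reject such a stale index depends on how the combined dataset distributes chunks, which is what the remaining checklist item is about.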

I haven't had much time lately, but I plan to pick this up again from this weekend.

bhimrazy avatar Sep 19 '24 10:09 bhimrazy

⚠️ GitGuardian has uncovered 1 secret following the scan of your pull request.

Please consider investigating the findings and remediating the incidents. Failure to do so may lead to compromising the associated services or software components.

Since your pull request originates from a forked repository, GitGuardian is not able to associate the secrets uncovered with secret incidents on your GitGuardian dashboard. Skipping this check run and merging your pull request will create secret incidents on your GitGuardian dashboard.

🔎 Detected hardcoded secret in your pull request
Detected secret:
  • GitGuardian id: 5685611
  • Status: Triggered
  • Secret: Generic High Entropy Secret
  • Commit: 398a654990bde84c0cb9b25e5544680ec4a2e846
  • Filename: tests/streaming/test_resolver.py
🛠 Guidelines to remediate hardcoded secrets
  1. Understand the implications of revoking this secret by investigating where it is used in your code.
  2. Replace and store your secret safely.
  3. Revoke and rotate this secret.
  4. If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.


🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.

gitguardian[bot] avatar Sep 19 '24 10:09 gitguardian[bot]

Getting close to it:

The test case seems to fail with an IndexError when the number of workers is greater than 2 and the iteration is stopped close to the midpoint of the dataloader length.

[screenshot: failing test output with IndexError]
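One way the worker count can matter: chunks are typically dealt out across workers, so a single global chunk index has to be translated into a (worker, local index) pair before resuming. The sketch below is hypothetical (a plain round-robin split; litdata's actual assignment may differ), but it shows how indexing a worker's chunk list with the untranslated global index goes out of range once there are several workers.

```python
def assign_chunks(num_chunks, num_workers):
    """Round-robin chunk assignment: one list of chunk ids per worker."""
    return [list(range(w, num_chunks, num_workers)) for w in range(num_workers)]


def locate(global_chunk_index, num_workers):
    """Translate a global chunk index into (worker_id, local_index)."""
    return global_chunk_index % num_workers, global_chunk_index // num_workers


chunks_per_worker = assign_chunks(num_chunks=10, num_workers=4)
# worker 0 -> [0, 4, 8], worker 1 -> [1, 5, 9],
# worker 2 -> [2, 6],    worker 3 -> [3, 7]

worker_id, local_index = locate(global_chunk_index=5, num_workers=4)
# Indexing worker 1's 3-element list with the *global* index 5 would be
# out of range, while the translated local index 1 is valid.
```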

bhimrazy avatar Sep 22 '24 07:09 bhimrazy

@bhimrazy Is this an active issue? I tried restarting training midway during an epoch last week and was able to continue training, when using a CombinedStreamingDataset.

schopra8 avatar Dec 17 '24 02:12 schopra8

@bhimrazy Is this an active issue? I tried restarting training midway during an epoch last week and was able to continue training, when using a CombinedStreamingDataset.

Thank you, @schopra8, for bringing this to my attention. There’s a test case in this PR that fails for the same issue. I’ll review it with the latest updates.

bhimrazy avatar Dec 17 '24 02:12 bhimrazy

This PR got closed due to an issue with my forked repo.
I will address this issue with a new PR in the near future.

#507

bhimrazy avatar Feb 20 '25 18:02 bhimrazy