
IBM experimental dataloaders

Open daviswer opened this issue 1 year ago • 5 comments

This PR introduces an experimental PyTorch-native dataloader from IBM that is distributed, stateful, checkpointable, composable and rescalable. It is intended for use in large-scale model pretraining, particularly in research settings where rapid iteration between datasets may be required. It automatically and invisibly handles data sharding, shuffling, subdataset weighting, checkpoint saving and loading, and more, with minimal overhead and high throughput.
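For context on what "stateful" and "checkpointable" mean here in practice, the snippet below is a minimal, purely illustrative sketch (not this PR's code) of an IterableDataset that exposes state_dict()/load_state_dict() so iteration can resume from a saved position after a restart. The class, fields, and method bodies are hypothetical.

```python
# Minimal illustrative sketch of a stateful, checkpointable dataset.
# This is NOT the PR's implementation; names and structure are assumptions.
from torch.utils.data import IterableDataset


class ToyStatefulDataset(IterableDataset):
    """Yields token ids from a flat list, remembering where it left off."""

    def __init__(self, tokens):
        self.tokens = tokens
        self.position = 0  # index of the next token to emit

    def __iter__(self):
        # Resume from the saved position instead of restarting at 0.
        while self.position < len(self.tokens):
            item = self.tokens[self.position]
            self.position += 1  # advance before yielding so state reflects consumed samples
            yield item

    def state_dict(self):
        # Everything needed to resume is captured here and can be saved
        # alongside the model/optimizer checkpoint.
        return {"position": self.position}

    def load_state_dict(self, state):
        self.position = state["position"]


if __name__ == "__main__":
    ds = ToyStatefulDataset(list(range(10)))
    it = iter(ds)
    consumed = [next(it) for _ in range(4)]   # pretend we trained on 4 samples
    ckpt = ds.state_dict()                    # checkpoint the loader state

    resumed = ToyStatefulDataset(list(range(10)))
    resumed.load_state_dict(ckpt)             # restore on restart
    print(consumed, list(resumed))            # [0, 1, 2, 3] [4, 5, 6, 7, 8, 9]
```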

  • Add experimental dataset source file
  • Add experimental dataloader builder, hooked into torchtitan cfg
  • Update torchtitan cfg with additional dataset arg fields
  • Update train script to build the experimental dataloader instead of the HF one, depending on cfg flags (see the sketch after this list)
  • Replace the existing C4-mini example dataset with one that matches the expected formatting for the experimental dataloader
  • TODO: port over unit tests as well
  • TODO: preprocessing script(s) for the new dataset format
  • TODO: further cleanup/iteration
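As a rough picture of how the cfg wiring above could look, here is a hypothetical sketch of a config with extra dataset fields and a builder that branches on a flag. The flag and function names are assumptions for illustration, not the actual fields added in this PR.

```python
# Purely illustrative sketch (not the PR's actual code) of wiring an
# experimental dataloader behind config flags: the config gains extra
# dataset fields, and the train script branches on a flag.
from dataclasses import dataclass


@dataclass
class DataConfig:
    dataset_path: str = "path/to/dataset"
    use_experimental: bool = False   # hypothetical flag name
    datasets: str = "c4_mini"        # hypothetical comma-separated subdataset names
    weights: str = "1"               # hypothetical matching sampling weights


def build_hf_dataloader(cfg: DataConfig):
    return f"HF loader over {cfg.dataset_path}"            # stand-in for the real builder


def build_experimental_dataloader(cfg: DataConfig):
    return f"experimental loader over {cfg.dataset_path}"  # stand-in for the real builder


def build_dataloader(cfg: DataConfig):
    # The train script picks the loader based on the config flag.
    if cfg.use_experimental:
        return build_experimental_dataloader(cfg)
    return build_hf_dataloader(cfg)


print(build_dataloader(DataConfig(use_experimental=True)))
```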

daviswer avatar May 31 '24 07:05 daviswer

Hi @daviswer!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

facebook-github-bot avatar May 31 '24 07:05 facebook-github-bot

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

facebook-github-bot avatar May 31 '24 07:05 facebook-github-bot

Thanks for the PR! Reviewing it now

gokulavasan avatar Jun 03 '24 18:06 gokulavasan

Thanks for taking a look! Left a bunch of responses; hopefully this brings clarity.

daviswer avatar Jun 05 '24 22:06 daviswer

Updating the dataloader to match the latest version in our public repo. Re-syncs to latest main, and incorporates many previously discussed features:

  • Remove dependence on the separate metadata "count file" in the dataset directory
  • Support n_workers > 1. This is accomplished by shunting all path/rank-dependent setup out of initialization and into a new setup() method, which runs after init but before any other op (see the sketch after this list)
  • Support HF-style parquet raw text datasets, with tokenization on the fly (for reasonably sized documents/shardfiles)
  • Support non-flat data directories: all legal files under the specified location will be included, regardless of depth. Enables simple weight-free dataset mixing via a single StreamingDocDataset on the parent directory
  • SamplingDataset and ScalableShardDataset are now implemented as proper _WrapperDatasets, reflecting the intended modular usage
  • Replace the Weird_Separated_Camel_Case naming convention with ProperClassNaming
  • Allow PreloadBufferDataset to shrink back down to the desired size after rescaling to a smaller number of workers
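To illustrate the setup() pattern mentioned above (deferring rank/worker-dependent sharding out of __init__ so that num_workers > 1 works), here is a hypothetical sketch. The class name and shard arithmetic are assumptions for illustration, not the PR's implementation.

```python
# Hypothetical illustration of deferring rank/worker-dependent setup out of
# __init__ so the dataset can be forked safely into num_workers > 1 processes.
# Names here are assumptions; this is not the PR's code.
import torch
from torch.utils.data import IterableDataset, get_worker_info


class ShardedDataset(IterableDataset):
    def __init__(self, files, rank, world_size):
        # __init__ stores only cheap, picklable configuration; nothing here
        # depends on which dataloader worker we end up in.
        self.files = files
        self.rank = rank
        self.world_size = world_size
        self._my_files = None

    def setup(self):
        # Runs inside the worker process, after init but before iteration:
        # only now do we know the worker id, so only now do we shard.
        info = get_worker_info()
        worker_id = info.id if info is not None else 0
        num_workers = info.num_workers if info is not None else 1
        shard = self.rank * num_workers + worker_id
        num_shards = self.world_size * num_workers
        self._my_files = self.files[shard::num_shards]

    def __iter__(self):
        if self._my_files is None:
            self.setup()
        for f in self._my_files:
            yield f  # a real loader would yield tokenized samples from f


if __name__ == "__main__":
    loader = torch.utils.data.DataLoader(
        ShardedDataset([f"shard_{i}.parquet" for i in range(8)], rank=0, world_size=2),
        num_workers=2,
        batch_size=None,
    )
    for item in loader:
        print(item)
```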

daviswer avatar Aug 13 '24 21:08 daviswer