
Loss curve spikes on amalgamated datasets - need full-scale shuffler in dataloader

Open lessw2020 opened this issue 11 months ago • 5 comments

As part of e2e training, encountered wild loss curve spikes:

[Screenshot: loss curve with large spikes during e2e training (2024-03-07)]

After additional hyperparameter tuning and further investigation, the root cause is that we are reading the dataset sequentially. From the model's point of view, it sees data type A, learns and improves, then hits data type B, is surprised (spikes), then learns and improves again, and the cycle repeats.

By training on a single-data-source dataset, in this case openwebtext, we see a very clean loss curve on e2e training, confirming that the issue is the lack of shuffling:

[Screenshot: smooth loss curve when training on openwebtext only (2024-03-12)]
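
For reference, one way to keep the model from seeing each corpus as one long contiguous block is to interleave the sources up front. A minimal sketch with Hugging Face datasets follows; the configs and probabilities are illustrative only, not what was used in the runs above:

  # Illustrative sketch only: interleave two streaming sources so consecutive
  # samples come from different corpora instead of one corpus at a time.
  from datasets import load_dataset, interleave_datasets

  # Two C4 configs stand in for "amalgamated" sources; they share a schema.
  en = load_dataset("allenai/c4", name="en", split="train", streaming=True)
  realnews = load_dataset("allenai/c4", name="realnewslike", split="train", streaming=True)

  # Sample from each source with the given probabilities; the seed keeps the
  # interleaving reproducible across runs.
  mixed = interleave_datasets([en, realnews], probabilities=[0.8, 0.2], seed=42)

  for example in mixed.take(3):
      print(example["text"][:80])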

lessw2020 avatar Mar 12 '24 18:03 lessw2020

@tianyu-l @lessw2020 FYI, I am using this trick.

  hf_ds = HuggingFaceDataset(
      dataset_name, dataset_path, tokenizer, seq_len, world_size, rank, infinite
  )
  if shuffle:
      # per-rank, time-derived seed (requires `import time` in this module)
      hf_ds._data = hf_ds._data.shuffle(seed=int(rank*10007+int(time.time())))

XinDongol avatar May 08 '24 21:05 XinDongol

@XinDongol Why would you shuffle the dataset with that seed? Now that Stateful DataLoaders are about to be merged, you won't be able to resume training properly after a crash, because you won't know how the dataset was shuffled.

Random seeds are meant to make results reproducible; in this case, the effect is exactly the opposite.

TJ-Solergibert avatar May 10 '24 21:05 TJ-Solergibert

  hf_ds = HuggingFaceDataset(
      dataset_name, dataset_path, tokenizer, seq_len, world_size, rank, infinite
  )
  if shuffle:
      # per-rank, time-derived seed (requires `import time` in this module)
      hf_ds._data = hf_ds._data.shuffle(seed=int(rank*10007+int(time.time())))

@XinDongol For a map-style dataset, this works as expected. However, an IterableDataset uses a buffer to apply randomness within, so the issue won't be fixed if the buffer size is not, or cannot be, large enough to span the different amalgamated datasets.
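
To make the buffer limitation concrete, a small sketch (the dataset and buffer size are only examples):

  from datasets import load_dataset

  ds = load_dataset("allenai/c4", name="en", split="train", streaming=True)
  # Only `buffer_size` examples are held in memory and sampled from at a time,
  # so two examples farther apart in the stream than the buffer can never be
  # swapped. If each sub-dataset spans millions of rows, a 10k buffer cannot
  # mix one sub-dataset with the next.
  ds = ds.shuffle(seed=42, buffer_size=10_000)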

@TJ-Solergibert Checkpointing the random seeds used to shuffle the dataset would solve the problem. FYI, it is on our roadmap.
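
Not the planned implementation, just a rough sketch of what checkpointing the shuffle seed could look like; the class and attribute names are hypothetical, and state_dict/load_state_dict follow the usual stateful-dataloader convention:

  # Hypothetical sketch: remember the shuffle seed so a resumed run reshuffles
  # the HF streaming dataset identically.
  import time
  from torch.utils.data import IterableDataset

  class SeededShuffleDataset(IterableDataset):
      def __init__(self, hf_stream, seed=None, buffer_size=10_000):
          self._raw = hf_stream
          self._buffer_size = buffer_size
          self._seed = seed if seed is not None else int(time.time())
          self._data = self._raw.shuffle(seed=self._seed, buffer_size=buffer_size)

      def __iter__(self):
          yield from self._data

      def state_dict(self):
          # Persisting the seed is what makes the shuffle reproducible on resume.
          return {"seed": self._seed}

      def load_state_dict(self, state):
          self._seed = state["seed"]
          self._data = self._raw.shuffle(seed=self._seed, buffer_size=self._buffer_size)

Restoring the seed reproduces the shuffle order, but it does not restore the position within the stream; that part is what the .skip() discussion below is about.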

tianyu-l avatar May 14 '24 01:05 tianyu-l

Thanks for your answer @tianyu-l, it makes sense 😅

I was wondering, is there any way to avoid using .skip() when resuming training? In my setup (and on Colab), skipping 10,000,000 samples took approximately 90 s.

from datasets import load_dataset

# Stream C4 (en) and fast-forward past the first 10M samples.
ds = load_dataset("allenai/c4", name="en", split="train", streaming=True)
ds = ds.skip(10000000)
ds = iter(ds)
next(ds)  # fetching the first post-skip sample is where the ~90 s goes

TJ-Solergibert avatar May 14 '24 20:05 TJ-Solergibert

I was wondering, is there any way to avoid using .skip() when resuming training? In my setup (and on Colab), skipping 10,000,000 samples took approximately 90 s.

@TJ-Solergibert

  1. We should use .skip() when resuming training. In fact, it has been put into #279 (a rough sketch of the idea follows this list).
  2. That doesn't mean it is the ideal solution. For example, the C4 en section has more than 300M entries, which, extrapolating from your example, means over 45 min of skipping if we stop somewhere towards the end of the dataset. Ideally, even for a streaming=True IterableDataset, skip should be able to seek directly to the file position. As far as we know, this is something HF is working on.
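
For reference, a hedged sketch of skip-based resumption; this is not the code in #279, and the class and state names are made up for illustration:

  from datasets import load_dataset

  class ResumableStream:
      def __init__(self, path="allenai/c4", config="en"):
          self._ds = load_dataset(path, name=config, split="train", streaming=True)
          self._sample_idx = 0  # number of samples already consumed

      def __iter__(self):
          # .skip() still reads and discards records one by one, which is why
          # resuming deep into the dataset is slow; seeking by file offset
          # (what HF is reportedly working on) would avoid this.
          for example in self._ds.skip(self._sample_idx):
              self._sample_idx += 1
              yield example

      def state_dict(self):
          return {"sample_idx": self._sample_idx}

      def load_state_dict(self, state):
          self._sample_idx = state["sample_idx"]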

tianyu-l avatar May 15 '24 03:05 tianyu-l