
`len(dataloader)` in a distributed setting differs between datapipes and map-style datasets

Open · NicolasHug opened this issue 2 years ago · 2 comments

In a distributed setting, len(dataloader) will return:

  • len(dataset) // (batch_size * num_GPUs) if dataset is a map-style dataset
  • len(dataset) // batch_size if dataset is a datapipe
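
For concreteness, plugging in the numbers from the snippet below (800 samples, batch size 10, 4 processes):

size, batch_size, num_gpus = 800, 10, 4
size // (batch_size * num_gpus)  # map-style dataset -> 20
size // batch_size               # datapipe          -> 80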

This discrepancy makes it a bit difficult to work with torchvision's training recipes, where we often need the size of the dataloader.

Below is an illustration of this discrepancy; you can run the snippet (even without a GPU) with `torchrun --nproc_per_node 4 script.py`

# Run this with e.g. `torchrun --nproc_per_node 4 script.py`
import torch.utils.data as data
import torch.distributed as dist
import torchdata


def replace_print():
    # Monkey-patch the built-in print so that only rank 0 prints, prefixed with "[GPU 0]".
    import builtins as __builtin__
    builtin_print = __builtin__.print
    def print(*args, **kwargs):
        if dist.get_rank() == 0:
            builtin_print("[GPU 0]", *args, **kwargs)

    __builtin__.print = print


# Setting up DDP - you can ignore this
dist.init_process_group(backend="gloo")
replace_print()
dist.barrier()


size = 800
dp = torchdata.datapipes.iter.IterableWrapper(range(size)).sharding_filter()
dl = data.DataLoader(dp, batch_size=10, num_workers=4, drop_last=True)
print(f"with dp, {len(dl) = }")
# Gives: 80

ds = list(range(size))
dl = data.DataLoader(ds, batch_size=10, num_workers=4, drop_last=True, sampler=data.DistributedSampler(ds, shuffle=False))
print(f"with mapstyle, {len(dl) = }")
# Gives: 20
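
In the meantime, the recipes could normalize the reported length themselves. Here is a minimal sketch of that workaround, assuming the behaviour shown above (the helper name is mine, not an existing API):

# Hypothetical helper (not an existing API): normalize len(dataloader) to a
# per-rank batch count, given the behaviour observed above.
def per_rank_num_batches(dataloader, uses_datapipe):
    world_size = dist.get_world_size() if dist.is_initialized() else 1
    if uses_datapipe:
        # the datapipe path reports len(dataset) // batch_size, not divided by world size
        return len(dataloader) // world_size
    # map-style + DistributedSampler already reports the per-rank count
    return len(dataloader)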

NicolasHug · Jun 22 '22 16:06

Thank you for opening the issue. It's kind of easy to fix, but we need to consider all the use cases. If users don't specify sharding_filter in the pipeline, the length should be len(dataset) * num_GPUs // batch_size.

I do want to understand when you need the size of the DataLoader. Is this related to the metadata for each Dataset?

ejguan · Jun 22 '22 16:06

If users don't specify sharding_filter in the pipeline, the length should be len(dataset) * num_GPUs // batch_size.

I agree. Interestingly, with map-style datasets, len(dataloader) is equal to len(dataset) // batch_size if users don't pass sampler=DistributedSampler(), which is equivalent to not calling .sharding_filter(). But I think len(dataset) * num_GPUs // batch_size, as you proposed, makes more sense.
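
Just to make that concrete with the snippet's numbers (reading the two formulas literally; I may be missing a subtlety about per-rank vs. global counts):

size, batch_size, num_gpus = 800, 10, 4
size * num_gpus // batch_size    # proposed length without sharding_filter -> 320
size // (batch_size * num_gpus)  # map-style + DistributedSampler          -> 20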

I do want to understand when you need the size of dataloader?

We rely on the size for our logger, which is how I found out about the discrepancy:

https://github.com/pytorch/vision/blob/59c4de9123eb1d39bb700f7ae7780fb9c7217910/references/classification/train.py#L25
https://github.com/pytorch/vision/blob/59c4de9123eb1d39bb700f7ae7780fb9c7217910/references/classification/utils.py#L109
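
Roughly, the pattern in those files is the one below (a simplified sketch, not the exact torchvision code): the logger needs len(dataloader) up front to format its "[i/total]" progress string.

# Simplified sketch of the logging loop in the linked recipes (names approximate):
def log_every(iterable, print_freq, header=""):
    total = len(iterable)  # <- this is where the datapipe / map-style discrepancy bites
    for i, obj in enumerate(iterable):
        if i % print_freq == 0:
            print(f"{header} [{i}/{total}]")
        yield obj

# for image, target in log_every(data_loader, print_freq=10, header="Epoch: [0]"):
#     train_one_batch(image, target)  # hypothetical training step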

Is this related to the metadata for each Dataset?

No, not directly. But I'm still looking into convenient ways to specify the length of the torchvision datapipes. I'll definitely come back to you on this when this is clearer for me.

NicolasHug · Jun 22 '22 16:06