gsplat
Problem with the sampler for multi-GPU training
It is common to use a DataLoader with a DistributedSampler when training on multiple GPUs. Why doesn't examples/simple_trainer.py use a distributed sampler? Is there a reason for that?
I'm not quite familiar with DistributedSampler. What's the benefit of using that?
It allows each process to use only a subset of the original data, which should reduce the number of iterations needed to traverse the full dataset.
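For reference, here is a minimal sketch of the DataLoader + DistributedSampler pattern being discussed. This is not gsplat's actual trainer; the dataset is a dummy stand-in and the process-group backend is just an example launched via torchrun:

```python
# Minimal sketch (assumption: not taken from examples/simple_trainer.py) of how a
# DataLoader combined with DistributedSampler splits a dataset across processes.
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def build_loader(batch_size: int = 4) -> DataLoader:
    # Dummy dataset standing in for the real image/camera dataset.
    dataset = TensorDataset(torch.arange(100).float())

    # Each rank only iterates over roughly len(dataset) / world_size samples,
    # so traversing the full dataset takes fewer iterations per process.
    sampler = DistributedSampler(dataset, shuffle=True, drop_last=False)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)


if __name__ == "__main__":
    # Launch with: torchrun --nproc_per_node=N this_script.py
    dist.init_process_group(backend="gloo")  # "nccl" for GPU training
    loader = build_loader()
    for epoch in range(2):
        # set_epoch changes the per-rank shuffle each epoch.
        loader.sampler.set_epoch(epoch)
        for (batch,) in loader:
            pass  # training step would go here
    dist.destroy_process_group()
```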