benchmark
Make MoCo model not distributed across multiple GPUs
The MoCo model is defined as a distributed NN, but we fix it to a single GPU in the benchmarks, removing all of the cross-GPU communication for simplicity's sake.
Oh, can you also fix the default device? And check off the box here: https://github.com/pytorch/benchmark/projects/1
And make sure to raise NotImplementedError for JIT? It looks like JIT is silently disabled for now.
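A minimal sketch of the requested check, assuming a hypothetical benchmark wrapper class (the real class name and constructor signature in the benchmark suite may differ):

```python
class MoCoBenchmark:
    """Hypothetical single-GPU benchmark wrapper for MoCo."""

    def __init__(self, device="cuda", jit=False):
        if jit:
            # Fail loudly instead of silently falling back to eager mode.
            raise NotImplementedError("MoCo does not support JIT yet")
        self.device = device
```

With this, passing `jit=True` surfaces the unsupported configuration immediately rather than silently ignoring it.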
I could comment out the code rather than delete it, but if/when we want to do distributed benchmarks it's probably easier to grab a fresh copy from the original repo anyway.
The calls to nn.dist (e.g. the gathers) won't work if it's not a DDP model.
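For illustration, the DDP-only collectives can collapse to local equivalents on a single GPU. This sketch uses plain Python lists as stand-ins so it runs without torch; the helper names only loosely mirror the original MoCo repo's `concat_all_gather` and batch-shuffle utilities:

```python
def concat_all_gather(tensor):
    # The original MoCo helper gathers the tensor from every rank via
    # torch.distributed.all_gather and concatenates the results. With a
    # single GPU the world size is 1, so the gather is the identity.
    return tensor

def batch_shuffle_single_gpu(batch, indices):
    # Stand-in for the DDP batch shuffle: just permute locally,
    # with no cross-rank broadcast of the shuffle indices.
    return [batch[i] for i in indices]
```

Swapping in identities like these is what lets the distributed calls be deleted outright instead of guarded behind a process-group check.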
What do you mean by fix the default device and check off the project?
OK, deleting or commenting is fine; I just wanted to ask.
Oops, I sent the wrong link. Check the MoCo-specific 'nit' in this issue: https://github.com/pytorch/benchmark/issues/65
Addressed, closing this PR.