
SlowMo (BMUF) support for PyTorch distributed training


This is for the parameter-averaging method in distributed training. The SlowMo method adds an additional (slow) momentum term that is used for the outer-loop updates, i.e. after parameter averaging.
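
As a rough sketch of what such an outer-loop step could look like in PyTorch (the function and argument names here are mine, not taken from the Fairscale/Fairseq code; with `slowmo_beta=0` and `slowmo_lr=1` it reduces to plain parameter averaging):

```python
import torch
import torch.distributed as dist

@torch.no_grad()
def slowmo_outer_step(model, slow_params, slow_momentum, slowmo_lr=1.0, slowmo_beta=0.5):
    """Hypothetical SlowMo/BMUF outer-loop step, run every K inner (local SGD) steps.

    slow_params: clones of the params from the previous outer step
    slow_momentum: slow momentum buffers, initialized to zeros
    """
    world_size = dist.get_world_size()
    for p, p_slow, u in zip(model.parameters(), slow_params, slow_momentum):
        # Parameter averaging across all workers.
        dist.all_reduce(p.data, op=dist.ReduceOp.SUM)
        p.data.div_(world_size)
        # Slow momentum on the outer "gradient": how far the averaged
        # params moved away from the previous outer-step params.
        u.mul_(slowmo_beta).add_(p_slow - p.data)
        # Outer SGD step from the previous slow params, then sync the
        # fast params back to the new slow params.
        p_slow.add_(u, alpha=-slowmo_lr)
        p.data.copy_(p_slow)
```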

Original Fairscale code. The code is also in Fairseq.

The method is actually conceptually the same as BMUF; only some of the experiments in the SlowMo paper go a bit beyond that. The two outer-loop updates line up as sketched after the references.

  • Chen and Huo, “Scalable Training of Deep Learning Machines by Incremental Block Training with Intra-Block Parallel Optimization and Blockwise Model-Update Filtering” (BMUF), ICASSP 2016
  • Wang et al., “SlowMo: Improving Communication-Efficient Distributed SGD with Slow Momentum” (SlowMo), ICLR 2020
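
For concreteness, here is how the two outer-loop updates line up (my own paraphrase of the notation; base-learning-rate scaling factors are simplified away):

$$u_t = \beta\, u_{t-1} + (x_{t-1} - \bar{x}_t), \qquad x_t = x_{t-1} - \alpha\, u_t \quad \text{(SlowMo)}$$

$$\Delta_t = \eta\, \Delta_{t-1} + \zeta\, (\bar{W}_t - W_{t-1}), \qquad W_t = W_{t-1} + \Delta_t \quad \text{(BMUF)}$$

Here $\bar{x}_t$ resp. $\bar{W}_t$ are the worker-averaged parameters. Substituting $\Delta_t = -\alpha\, u_t$ shows that the updates coincide for $\beta = \eta$ and $\zeta = \alpha$; with $\beta = 0$ and $\alpha = 1$, both reduce to plain parameter averaging.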
