
Distributed training error

Jerry-Master opened this issue 2 years ago

When I launch distributed training for the full StyleAvatar, I get the following error:

```
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by making sure all `forward` function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
```
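For context, this error typically appears when some submodule's output never reaches the loss, so DDP never sees gradients for its parameters. A minimal sketch of the pattern (a toy module, not StyleAvatar's actual code):

```python
import torch
import torch.nn as nn

class TwoHead(nn.Module):
    """Toy module with two heads; only one contributes to the loss."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(8, 8)
        self.head_a = nn.Linear(8, 1)  # used in the loss
        self.head_b = nn.Linear(8, 1)  # output never used -> grads never reduced

    def forward(self, x):
        feat = self.backbone(x)
        return self.head_a(feat), self.head_b(feat)

# Under DDP, computing the loss from only the first output leaves
# head_b's parameters without gradients, which triggers the error
# above unless find_unused_parameters=True is set.
```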

Everything is fine when training on a single machine. Any clue as to what is happening?

I managed to fix it by passing `find_unused_parameters=True` to every `DistributedDataParallel` constructor (a sketch of the workaround is below), but that produces several warnings, so I prefer to open an issue rather than a pull request.
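For reference, this is the shape of the workaround I applied. The module and variable names here are hypothetical stand-ins; the actual training script wraps each of StyleAvatar's networks the same way:

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# Assumes the script is launched with torchrun, which sets RANK,
# WORLD_SIZE, and LOCAL_RANK for each process.
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# Hypothetical stand-in for one of StyleAvatar's networks.
model = nn.Linear(8, 8).cuda()
model = DDP(
    model,
    device_ids=[local_rank],
    find_unused_parameters=True,  # workaround for the error above; adds an
                                  # extra pass over the autograd graph and
                                  # emits the warnings mentioned
)
```

The downside, as the warnings note, is the extra per-iteration traversal of the autograd graph, which is why it would be better to find which parameters are actually unused in the full model.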

Jerry-Master · Aug 03 '23 12:08