DDP kwargs

nbroad1881 opened this issue 2 years ago • 0 comments

I made a custom model and I got this error:

RuntimeError: Expected to have finished reduction in the prior iteration before 
starting a new one. This error indicates that your module has parameters that 
were not used in producing loss. You can enable unused parameter detection by 
passing the keyword argument `find_unused_parameters=True` to 
`torch.nn.parallel.DistributedDataParallel`, 
and by making sure all `forward` function outputs participate in calculating loss. 

If you already have done the above, then the distributed data parallel module 
wasn't able to locate the output tensors in the return value of your module's 
`forward` function. Please include the loss function and the structure of the 
return value of `forward` of your module when reporting this issue (e.g. list, 
dict, iterable).
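
For context, this error typically comes from a module that registers parameters which the forward pass never touches. A minimal sketch of the failure mode (toy model, illustrative names; the commented DDP part assumes an initialized process group, e.g. launched with torchrun):

import torch
import torch.nn as nn

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(16, 16)
        self.head_a = nn.Linear(16, 2)
        self.head_b = nn.Linear(16, 2)  # registered but never used below

    def forward(self, x):
        # head_b's parameters never contribute to the output, so under
        # DistributedDataParallel their gradients are never reduced and
        # DDP raises the RuntimeError above on the next iteration.
        return self.head_a(self.backbone(x))

# With plain DDP the fix is the keyword the error message names:
# model = torch.nn.parallel.DistributedDataParallel(
#     ToyModel().cuda(), find_unused_parameters=True
# )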

I found an issue in the accelerate repo which indicated that the solution was the following:

from accelerate import Accelerator, DistributedDataParallelKwargs

# Tell Accelerate to construct DDP with find_unused_parameters=True
ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
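
The accelerator built this way is then used as usual; a minimal sketch, where model, optimizer, and train_loader are placeholders for your own objects:

# prepare() wraps the model in DDP using the kwargs handler above
model, optimizer, train_loader = accelerator.prepare(model, optimizer, train_loader)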

I added another argument to Tez that allows the user to set this flag.
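
Not the actual patch, but a hypothetical illustration of the shape such an argument could take in a trainer that builds its own Accelerator internally (the Trainer class and argument name here are assumptions, not Tez's real API):

from accelerate import Accelerator, DistributedDataParallelKwargs

class Trainer:
    # Hypothetical trainer fragment: the new flag is threaded through
    # to the Accelerator's kwargs handlers.
    def __init__(self, find_unused_parameters: bool = False):
        handlers = []
        if find_unused_parameters:
            handlers.append(
                DistributedDataParallelKwargs(find_unused_parameters=True)
            )
        self.accelerator = Accelerator(kwargs_handlers=handlers)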

nbroad1881 • Jan 29 '23 16:01