
Problems when applying the DINO method

InfinityBox opened this issue on Jul 07 '23 · 0 comments

Hi, following the DINO implementation in projects/dino/, I got the error below:

```
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one.
This error indicates that your module has parameters that were not used in producing loss.
You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by making sure all forward function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function.
Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 0: 157
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error.
```
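For anyone hitting the same error: the message itself points at the usual workaround, enabling `find_unused_parameters` on the DDP wrapper. Below is a minimal sketch of how that could be applied in an MMEngine-based mmselfsup config; `model_wrapper_cfg` is MMEngine's standard hook for customizing the model wrapper, but whether this is the right fix for the DINO project specifically (rather than a symptom of a real bug in the loss path) is an assumption, not something confirmed in this issue.

```python
# Hypothetical addition to the DINO config (e.g. a config under projects/dino/);
# a sketch of the workaround the error message suggests, not a confirmed fix.
model_wrapper_cfg = dict(
    type='MMDistributedDataParallel',
    # Let DDP tolerate parameters (here, parameter index 157 on rank 0)
    # that receive no gradient in a given iteration.
    find_unused_parameters=True,
)
```

The plain-PyTorch equivalent is what the error text names directly: `torch.nn.parallel.DistributedDataParallel(model, find_unused_parameters=True)`. Before masking the problem this way, it may be worth running once with the environment variable `TORCH_DISTRIBUTED_DEBUG=DETAIL` set, which (per the same error message) reports which particular parameters received no gradient.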
