
Accelerate: fix get_trainable_params in controlnet-llite training

Open · aria1th opened this issue 9 months ago · 1 comment

aria1th · May 07 '24 09:05

[rank1]:     if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
[rank1]: RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by 
[rank1]: making sure all `forward` function outputs participate in calculating loss. 
[rank1]: If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
[rank1]: Parameter indices which did not receive grad for rank 1: 1 2 3 4 11 12 13 14 21 22 23 24 31 32 33 34 41 42 43 44 51 52 53 54 61 62 63 64 71 72 73 74 81 82 83 84 91 92 93 94 101 102 103 104 111 112 113 114 121 122 123 124 131 132 133 134 141 142 143 144 151 152 153 154 161 162 163 164 165 166 173 174 175 176 177 178 185 186 187 188 189 190 197 198 199 200 201 202 209 210 211 212 213 214 221 222 223 224 225 226 ...
[rank1]:  In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error

This seems to be quite broken; the training script itself should have been fixed, but it has other problems...
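As the error message itself suggests, one way to work around this when launching through Accelerate is to forward `find_unused_parameters=True` to `DistributedDataParallel` via a kwargs handler. This is a minimal sketch of that workaround, not the repository's actual fix; the `network`, `optimizer`, and `dataloader` names are placeholders, and enabling this option masks the underlying `get_trainable_params` problem at some cost in training speed.

```python
# Sketch: pass find_unused_parameters=True through Accelerate's DDP kwargs.
from accelerate import Accelerator
from accelerate.utils import DistributedDataParallelKwargs

ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])

# The objects below are placeholders for whatever the training script builds:
# network, optimizer, dataloader = accelerator.prepare(network, optimizer, dataloader)
```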

aria1th · May 08 '24 00:05

I tested the fixed part and it works.

sdbds · May 16 '24 15:05

Thank you for this. In my environment, training lllite works fine without this fix. However, this should be fixed.

> This seems to be quite broken; the training script itself should have been fixed, but it has other problems...

The error seems to occur with multi-GPU training. I will investigate this; please train with a single GPU for now.
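For context on why multi-GPU runs hit this: DDP expects every parameter it wraps to receive a gradient each step, so frozen base-model weights must be kept out of the set handed to the optimizer and only the ControlNet-LLLite parameters collected. The helper below is a hypothetical illustration of that idea, not the actual sd-scripts implementation of `get_trainable_params` or its fix.

```python
# Hypothetical sketch: collect only parameters that will actually receive gradients,
# so frozen U-Net weights are excluded from the optimizer under DDP.
import torch

def get_trainable_params(model: torch.nn.Module):
    """Return only parameters with requires_grad=True."""
    return [p for p in model.parameters() if p.requires_grad]

# Example usage (placeholder names):
# optimizer = torch.optim.AdamW(get_trainable_params(network), lr=1e-4)
```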

kohya-ss · May 19 '24 07:05