Guidance Encoder receives NaN gradients during training
When I inspected each parameter's gradient, I found that every attention block in the guidance encoder except the last one receives a NaN gradient. If I train with plain torch distributed instead of accelerate, this leads to an error. After looking into the encoder's code, I found that many attention blocks are initialized, saved, and loaded, but only the last layer is actually used in the forward pass (see the sketch below for how I checked the gradients).
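For reference, here is a minimal sketch of the kind of per-parameter gradient check described above. The function name `report_bad_gradients` and the variable `guidance_encoder` are just placeholders, not names from this repo; it simply walks `named_parameters()` after `backward()` and flags parameters whose gradient is missing or contains NaN.

```python
import torch

def report_bad_gradients(model: torch.nn.Module) -> None:
    """Print parameters that received no gradient or a NaN gradient after backward()."""
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        if param.grad is None:
            # Typically means the parameter was never used in the forward pass.
            print(f"{name}: no gradient (unused in forward?)")
        elif torch.isnan(param.grad).any():
            print(f"{name}: NaN gradient")

# Hypothetical usage inside a training step:
# loss.backward()
# report_bad_gradients(guidance_encoder)
```

This matches what I observed: the attention blocks that never participate in the forward pass show up here, which is also why plain `torch.nn.parallel.DistributedDataParallel` complains unless `find_unused_parameters=True` is set.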