vits
Infinite KL divergence for low-resource language data
Hi, I'm trying to train the base VITS model on a low-resource language. We have prepared about 27K utterances with settings close to LJSpeech. During training, however, the KL loss diverges to infinity, caused by a huge gap between the prior and the posterior across all C*T values. Could you guide me on how to address this, and whether it is likely to resolve with more training epochs? Thank you.
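For context, the loss in question is the closed-form KL between two diagonal Gaussians (posterior vs. prior), parameterized by mean and log standard deviation. The sketch below (my own minimal NumPy version, not the exact VITS code) shows why a large mismatch in the predicted log-scales makes the term blow up, and illustrates one common mitigation, clamping the predicted log-scales, which is an assumption on my part rather than an official fix:

```python
import numpy as np

def gaussian_kl(m_q, logs_q, m_p, logs_p):
    """Per-element KL( N(m_q, exp(logs_q)^2) || N(m_p, exp(logs_p)^2) )
    for diagonal Gaussians parameterized by mean and log standard deviation."""
    return (logs_p - logs_q - 0.5
            + 0.5 * (np.exp(2.0 * logs_q) + (m_q - m_p) ** 2)
            * np.exp(-2.0 * logs_p))

# Identical distributions: KL is exactly 0.
print(gaussian_kl(0.0, 0.0, 0.0, 0.0))

# A very small prior scale against a unit-scale posterior makes the
# exp(-2 * logs_p) factor explode; a large prior/posterior gap like
# this is the kind of mismatch that drives the loss toward infinity.
print(gaussian_kl(0.0, 0.0, 0.0, -10.0))  # astronomically large

# Hypothetical mitigation: clamp the predicted log-scale into a sane
# range before computing the loss, keeping the KL large but finite.
logs_p_clamped = np.clip(-10.0, -4.0, 4.0)
print(gaussian_kl(0.0, 0.0, 0.0, logs_p_clamped))
```

If the predicted prior log-scales are collapsing (or the posterior and prior means never overlap), the KL will not shrink with more epochs on its own; checking the range of the predicted log-scales during training is a cheap first diagnostic.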