torch-pesq
Training Instability with torch_pesq: Loss Gradually Becomes NaN
When training my model with torch_pesq integrated, the loss gradually drifts until it becomes NaN, which breaks the gradients and halts training.
I am using this along with MultiResolutionSpectralLoss as follows:
def __init__(self, ...):
    ...
    self.pesq_loss = PesqLoss(sample_rate=self.target_sr, factor=10)

def training_step(self, ...):
    ...
    loss_pesq = self.pesq_loss(wav.squeeze(1), wav_hat.squeeze(1)).mean()
    reconstruction_loss = loss_mrl + loss_pesq
    ...
I’ve experimented with various gradient clipping values, but the issue persists. I'm unsure why the loss continues to drift toward NaN. Any insights or suggestions would be greatly appreciated.
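For debugging, one option is to log which loss term first goes non-finite, e.g. with a small guard like the one below (sketch only; check_finite is a hypothetical local helper, torch.isfinite is standard PyTorch):

```python
import torch

def check_finite(name: str, value: torch.Tensor) -> torch.Tensor:
    # Log the first loss term that produces NaN/Inf so the culprit is obvious
    if not torch.isfinite(value).all():
        print(f"non-finite value in loss term '{name}': {value}")
    return value

# inside training_step, mirroring the names in the snippet above:
# loss_mrl  = check_finite("mrl",  loss_mrl)
# loss_pesq = check_finite("pesq", loss_pesq)
# reconstruction_loss = loss_mrl + loss_pesq
```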
I'm not sure what the cause may be, but can you try combining the PESQ term with a simple MSE loss first and see whether it still diverges?
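Something along these lines should tell you whether the PESQ term alone is what diverges (just a sketch; the 16 kHz sample rate is a placeholder for your target_sr, and the two terms are weighted equally here):

```python
import torch.nn.functional as F
from torch_pesq import PesqLoss

pesq_loss = PesqLoss(factor=10, sample_rate=16000)  # placeholder rate; use your target_sr

def reconstruction_loss(wav, wav_hat):
    # Swap MultiResolutionSpectralLoss for plain MSE to isolate the PESQ term
    loss_mse = F.mse_loss(wav_hat, wav)
    loss_pesq = pesq_loss(wav.squeeze(1), wav_hat.squeeze(1)).mean()
    return loss_mse + loss_pesq
```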