pytorch-pcen
NaN during training
Thanks for sharing this.
I am trying to train PCEN as a torch layer. However, at unpredictable points during training, the loss collapses to NaN once the layer is introduced. I suspect a particular parameter configuration is to blame, but I cannot trace the bug.
Thanks
Is this with a CTC objective? Have you isolated the issue to PCEN? Is the PCEN frontend trainable, or is it fixed?
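For reference, a minimal sketch of the PCEN computation (per Wang et al., "Trainable Frontend for Robust and Far-Field Keyword Spotting") shows one common way this NaN arises: if gradient descent pushes a trainable parameter such as `delta` nonpositive, the fractional power is taken of a negative base and the output becomes NaN, which then poisons the loss. The parameter values and the log-space guard below are illustrative assumptions, not the settings from this repo; the sketch uses NumPy so the failure mode is easy to reproduce outside a training loop.

```python
import numpy as np

def pcen(E, s=0.025, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    """Per-channel energy normalization over a (time, freq) energy matrix.

    M is a first-order IIR smoother of E along the time axis:
        M[t] = (1 - s) * M[t-1] + s * E[t]
    PCEN[t] = (E[t] / (eps + M[t])**alpha + delta)**r - delta**r
    """
    M = np.zeros_like(E)
    M[0] = E[0]
    for t in range(1, len(E)):
        M[t] = (1 - s) * M[t - 1] + s * E[t]
    return (E / (eps + M) ** alpha + delta) ** r - delta ** r

rng = np.random.default_rng(0)
E = rng.random((100, 40))  # stand-in for mel filterbank energies

out = pcen(E)
assert np.isfinite(out).all()  # well-behaved with positive parameters

# If the optimizer drives the trainable delta negative, the base of the
# fractional power r goes negative for quiet frames and NaNs appear:
with np.errstate(invalid="ignore"):
    bad = pcen(E, delta=-0.5)
assert np.isnan(bad).any()

# A common guard: store log-parameters and exponentiate on the forward
# pass, so the effective delta/alpha/r/s stay strictly positive no matter
# what the optimizer does to the underlying weights.
log_delta = np.log(2.0)
safe = pcen(E, delta=np.exp(log_delta))
assert np.isfinite(safe).all()
```

In a torch implementation the same guard would mean registering `log_delta` (and similarly the other gains) as the `nn.Parameter` and applying `exp` in `forward`; checking whether this repo's layer constrains its parameters that way would be a natural first debugging step.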