WaveRNN
add mixed precision training
This pull request adds an option to enable mixed precision training via the torch.cuda.amp package.
I can't figure out why, but for Tacotron mixed precision actually decreases training speed, so I recommend keeping the tts_use_mixed_precision
option disabled. For WaveRNN I saw a ~30% training speed boost (2.5 steps/sec vs 3.2 steps/sec on an RTX 2080 with a custom dataset) and ~28% lower GPU memory usage (3.37GB vs 2.43GB). voc_use_mixed_precision
is enabled by default.