Speaker-independent-emotional-voice-conversion-based-on-conditional-VAW-GAN-and-CWT
This is the implementation of our Interspeech 2020 paper "Converting anyone's emotion: towards speaker-independent emotional voice conversion".
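As a rough illustration of the CWT-based prosody modelling named in the title (not code from this repository), the sketch below decomposes a log-F0 contour into multi-scale continuous wavelet coefficients. It assumes PyWavelets (`pywt`) with a Mexican-hat wavelet and a 10-scale dyadic setup, which are illustrative choices here rather than the repository's exact configuration.

```python
# Minimal sketch: CWT decomposition of an F0 contour into multi-scale coefficients.
# Assumes PyWavelets ("pywt") and a 10-scale Mexican-hat setup for illustration only.
import numpy as np
import pywt


def cwt_decompose_f0(f0, num_scales=10):
    """Decompose a frame-level F0 contour (Hz, 0 = unvoiced) into CWT coefficients."""
    # Interpolate over unvoiced (zero) frames so the contour is continuous.
    voiced = f0 > 0
    f0_interp = np.interp(np.arange(len(f0)), np.flatnonzero(voiced), f0[voiced])

    # Work on the mean/variance-normalized log-F0 contour.
    log_f0 = np.log(f0_interp)
    log_f0 = (log_f0 - log_f0.mean()) / (log_f0.std() + 1e-8)

    # Dyadic scales, one per decomposition level (an assumption for illustration).
    scales = 2.0 ** np.arange(1, num_scales + 1)
    coeffs, _ = pywt.cwt(log_f0, scales, "mexh")
    return coeffs  # shape: (num_scales, num_frames)
```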
Hello and thank you for sharing your work! Could you please provide the pretrained model for inference? Thank you very much in advance!
Very cool work! You should add a license so people know how they may use it. :)
Bumps [tensorflow-gpu](https://github.com/tensorflow-gpu/tensorflow-gpu) from 1.5.0 to 1.15.2. Commits: see the full diff in the compare view ([configuring automated security fixes](https://help.github.com/articles/configuring-automated-security-fixes)). Dependabot will resolve any conflicts with this PR as long as you don't alter...