VoiceCraft
Adding emotion tag to VoiceCraft
I'm interested in contributing to VoiceCraft by adding emotion control functionality. My goal is to enable the model to generate audio with a specified emotion while cloning a voice from a reference audio. I have a few questions about implementing this feature:
- I'm considering using the Emotional Speech Dataset (ESD) for training. Is this dataset suitable, or would you recommend alternatives?
- Should the loss function be modified to account for the emotion tag?
- The paper mentions that training VoiceCraft took around 240 hours. Would implementing this new feature require a similar training time, or could it be done more efficiently?
As someone new to open-source contributions, I'd appreciate any guidance on how to proceed with this feature addition. Thank you for your help!
- The dataset sounds good.
- This is something worth exploring.
- Would need experiments to see; making it more efficient is a good research/engineering problem.
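One common way to add emotion control without touching the loss function at all is to condition the model on a learned emotion embedding: map each of ESD's five emotion categories to a vector and add it to every input token embedding, leaving the token-prediction cross-entropy unchanged. Here is a minimal toy sketch of that idea; all names (`condition_on_emotion`, `emotion_table`, the dimensions) are illustrative and not part of VoiceCraft's actual API, and a real implementation would use a trainable embedding layer at the model's hidden size.

```python
import random

EMOTIONS = ["neutral", "happy", "angry", "sad", "surprise"]  # ESD's five categories
EMB_DIM = 8  # toy size; a real model would use its hidden dimension

random.seed(0)
# In practice this would be a trainable lookup table (e.g. an embedding
# layer); random vectors here purely for illustration.
emotion_table = {e: [random.gauss(0, 1) for _ in range(EMB_DIM)]
                 for e in EMOTIONS}

def condition_on_emotion(token_embs, emotion):
    """Add the emotion embedding to every token embedding."""
    emb = emotion_table[emotion]
    return [[t + e for t, e in zip(tok, emb)] for tok in token_embs]

# Toy usage: 4 "token" embeddings, tagged with the "happy" emotion.
tokens = [[random.gauss(0, 1) for _ in range(EMB_DIM)] for _ in range(4)]
conditioned = condition_on_emotion(tokens, "happy")
print(len(conditioned), len(conditioned[0]))  # 4 8
```

Since the emotion tag enters only through the input conditioning, the existing training objective and most of the training loop stay as-is; whether that is enough, or the loss needs an explicit emotion term, is exactly the open question in the thread above.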