v2 and emotion / precise voice control.
I got v2 working by adapting the provided Jupyter notebook ( https://github.com/myshell-ai/OpenVoice/blob/main/demo_part3.ipynb ) into a Python script.
However, while the paper/demos show control of emotion and talk about precise control of pauses etc., I can't find any example of how to accomplish this in practice.
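For anyone else doing the same, here is roughly what that adaptation looks like as a standalone script. This is only a sketch following the v2 demo notebook; the checkpoint paths (checkpoints_v2/...) and the reference clip (resources/example_reference.mp3) are the ones the demo assumes, so adjust them to your setup:

```python
import os
import torch
from openvoice import se_extractor
from openvoice.api import ToneColorConverter
from melo.api import TTS

ckpt_converter = 'checkpoints_v2/converter'
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
output_dir = 'outputs_v2'
os.makedirs(output_dir, exist_ok=True)

# Load the v2 tone color converter
tone_color_converter = ToneColorConverter(f'{ckpt_converter}/config.json', device=device)
tone_color_converter.load_ckpt(f'{ckpt_converter}/checkpoint.pth')

# Extract the target speaker embedding from a reference recording (the voice to clone)
reference_speaker = 'resources/example_reference.mp3'
target_se, audio_name = se_extractor.get_se(reference_speaker, tone_color_converter, vad=False)

# Generate base speech with MeloTTS, then convert its tone color to the reference speaker
text = 'Did you ever hear a folk tale about a giant turtle?'
src_path = f'{output_dir}/tmp.wav'

model = TTS(language='EN', device=device)
speaker_ids = model.hps.data.spk2id
for speaker_key in speaker_ids.keys():
    speaker_id = speaker_ids[speaker_key]
    se_key = speaker_key.lower().replace('_', '-')

    source_se = torch.load(f'checkpoints_v2/base_speakers/ses/{se_key}.pth', map_location=device)
    model.tts_to_file(text, speaker_id, src_path, speed=1.0)

    tone_color_converter.convert(
        audio_src_path=src_path,
        src_se=source_se,
        tgt_se=target_se,
        output_path=f'{output_dir}/output_v2_{se_key}.wav',
        message='@MyShell',  # watermark message, as in the demo
    )
```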
One of the v1 examples lists speaker="whispering" as an option, but that's not for v2, and I don't know how to get a list of the possible/valid styles/emotions.
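The closest thing I've found to a list is inspecting the v1 base speaker checkpoint itself. This is an untested sketch and assumes the style names (whispering, etc.) are the speaker entries in the v1 config, which is how the v1 demo's speaker= argument appears to be resolved; the paths are the ones from the v1 demos:

```python
import torch
from openvoice.api import BaseSpeakerTTS

# v1 English base speaker checkpoint (paths assumed from the v1 demos)
ckpt_base = 'checkpoints/base_speakers/EN'
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'

base_speaker_tts = BaseSpeakerTTS(f'{ckpt_base}/config.json', device=device)
base_speaker_tts.load_ckpt(f'{ckpt_base}/checkpoint.pth')

# If the style names live in the checkpoint config, this should print the
# mapping of valid `speaker=` values (e.g. 'default', 'whispering', ...).
print(base_speaker_tts.hps.speakers)

# For v2, the base speakers come from MeloTTS instead; this lists them,
# though they appear to be accents rather than emotions/styles.
from melo.api import TTS
model = TTS(language='EN', device=device)
print(list(model.hps.data.spk2id.keys()))
```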
Is there some documentation I'm missing?
Thanks.
Same question here. How do you simulate pauses in the audio?
Can I simulate style / pauses / excitement?
Anyone on this?
Use :::::: or :::.::: for a long pause, or ,, for a short one. Dots, double dots and commas.
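For anyone trying this, the markers would presumably just be embedded in the text passed to the base speaker. An untested sketch, reusing the v2 MeloTTS call from the demo and a hypothetical output path:

```python
import torch
from melo.api import TTS

device = 'cuda:0' if torch.cuda.is_available() else 'cpu'

# Embed the suggested pause markers directly in the input text.
text = 'Let me think ,, for a moment :::::: alright, here is my answer.'

model = TTS(language='EN', device=device)
speaker_ids = model.hps.data.spk2id
speaker_key = list(speaker_ids.keys())[0]  # pick one of the available base speakers
model.tts_to_file(text, speaker_ids[speaker_key], 'pause_test.wav', speed=1.0)
```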