
How to generate the duration statistics, such as the test_wavs/*.npy files?

Open · CMsmartvoice opened this issue 2 years ago · 1 comment

  1. The .npy files in */test_wavs are generated with the MFA (Montreal Forced Aligner) tool, but the phoneme sequence corresponding to each audio file has to be known first.

  2. The method is not limited to MFA; any tool that can predict articulation durations can be used, such as the acoustic model of an ASR system.

  3. The above methods estimate the duration information of the reference audio accurately. For cloning, however, the duration information does not need to be that accurate: a coarse manual estimate achieves the same effect. For example, phoneme durations can be estimated by eye and ear with a spectrogram viewer or another audio annotation tool.

  4. The Style_Encoder in this model is effectively an audio frame encoder: the final network output depends only on the content, with phoneme position information embedded in the result. Based on these temporal position encodings, the Style_Encoder can be used for a rough estimate of the reference audio's phoneme durations. Better still, this method does not require knowing the phoneme sequence of the audio. https://github.com/CMsmartvoice/One-Shot-Voice-Cloning/blob/6beec14888be82ade5164cc9e534f0a0c1ee38f9/TensorFlowTTS/tensorflow_tts/models/moduls/core.py#L700-L705
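To illustrate points 1-3, here is a minimal sketch of turning a forced alignment into a per-phoneme duration array and saving it as .npy. The alignment tuples, sample rate, and hop size below are illustrative assumptions (they must match whatever mel-spectrogram settings the model actually uses), and `ref_durations.npy` is a hypothetical file name, not necessarily the exact format of the repo's test_wavs/*.npy files.

```python
import numpy as np

# Hypothetical MFA alignment: (phoneme, start_sec, end_sec) tuples, as they
# would be read from a TextGrid "phones" tier. Values are illustrative.
alignment = [
    ("HH", 0.00, 0.07),
    ("AH", 0.07, 0.16),
    ("L",  0.16, 0.23),
    ("OW", 0.23, 0.42),
]

# Assumed feature settings; these must match the TTS front end's mel extraction.
SAMPLE_RATE = 22050
HOP_SIZE = 256  # samples per mel frame

def seconds_to_frames(t):
    """Convert a time in seconds to a mel-frame index."""
    return int(round(t * SAMPLE_RATE / HOP_SIZE))

# Per-phoneme duration in frames, computed from boundary frame indices so
# the durations sum exactly to the total number of frames.
durations = np.array(
    [seconds_to_frames(end) - seconds_to_frames(start)
     for _, start, end in alignment],
    dtype=np.int32,
)

np.save("ref_durations.npy", durations)  # same role as the test_wavs/*.npy files
print(durations.tolist(), int(durations.sum()))  # [6, 8, 6, 16] 36
```

Computing durations as differences of boundary frame indices (rather than rounding each interval length independently) guarantees the durations sum to the reference audio's total frame count, which duration-driven decoders typically require.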
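Point 4 can be sketched generically: given frame-level encoder outputs, segment boundaries can be guessed wherever adjacent frame embeddings differ sharply, yielding phoneme-like durations without knowing the phoneme sequence. This is not the repo's actual Style_Encoder code; the synthetic embeddings, the cosine-distance boundary rule, and the `threshold` value are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for Style_Encoder output: 3 "phonemes" of 6, 10, and 4 frames,
# each represented by a distinct 80-dim embedding plus a little noise.
centers = rng.normal(size=(3, 80))
frames = np.concatenate([
    centers[0] + 0.01 * rng.normal(size=(6, 80)),
    centers[1] + 0.01 * rng.normal(size=(10, 80)),
    centers[2] + 0.01 * rng.normal(size=(4, 80)),
])

def segment_durations(frames, threshold=0.5):
    """Estimate segment lengths from cosine similarity of adjacent frames."""
    unit = frames / np.linalg.norm(frames, axis=1, keepdims=True)
    cos = (unit[1:] * unit[:-1]).sum(axis=1)          # adjacent-frame similarity
    boundaries = np.flatnonzero(cos < threshold) + 1  # low similarity => boundary
    edges = np.concatenate([[0], boundaries, [len(frames)]])
    return np.diff(edges)

print(segment_durations(frames).tolist())  # recovers [6, 10, 4]
```

Real encoder outputs are noisier than this toy example, so in practice the threshold (or a smarter segmentation such as clustering or dynamic programming) would need tuning per model.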

Originally posted by @CMsmartvoice in https://github.com/CMsmartvoice/One-Shot-Voice-Cloning/issues/3#issuecomment-1046414407

CMsmartvoice · Feb 23 '22

Hoping for more training tutorials.

Chopin68 · Mar 03 '22