vall-e
Training in a different language
Hello enhuiz! Thank you for your work on developing this code. I used your Colab-notebook and everything ran perfectly.
Do you think it is possible to train in another language with transfer learning? Something like what is done with Wav2Vec, for example: starting from the pre-trained checkpoint.
To train in another language, customize g2p.py for that language.
Fine-tuning a pre-trained checkpoint on another language may work; I may consider it later.
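For readers wondering what "customize g2p.py" involves, here is a minimal sketch of a language-specific grapheme-to-phoneme function with a text-in, token-list-out interface. The toy lexicon and the `encode` name are illustrative assumptions, not the repo's actual code; a real implementation would plug in a full pronunciation dictionary or a tool such as phonemizer for the target language.

```python
# Toy lexicon for illustration only; a real g2p would use a full
# pronunciation dictionary or an external phonemizer backend.
TOY_LEXICON = {
    "hola": ["o", "l", "a"],
    "mundo": ["m", "u", "n", "d", "o"],
}


def encode(text: str) -> list[str]:
    """Convert text to a flat list of phoneme tokens, inserting a
    word-boundary token "_" between words (mirroring the kind of
    text -> token-list interface g2p.py exposes)."""
    phonemes: list[str] = []
    for i, word in enumerate(text.lower().split()):
        if i > 0:
            phonemes.append("_")  # word-boundary token
        # Fall back to character-level tokens for out-of-lexicon words.
        phonemes.extend(TOY_LEXICON.get(word, list(word)))
    return phonemes


print(encode("hola mundo"))  # ['o', 'l', 'a', '_', 'm', 'u', 'n', 'd', 'o']
```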
Is the encodec model robust enough to generate quantised representations for audio in other languages?
I am trying to transfer this model to Chinese. Customizing the token size or dropping rare tokens is also needed for a new language.
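The "dropping rare tokens" step mentioned above can be sketched as a simple frequency cutoff over the phonemized corpus. This is a hedged illustration, not the commenter's actual pipeline; the `build_vocab` helper, the `min_count` threshold, and the `<unk>` fallback token are all assumptions.

```python
from collections import Counter


def build_vocab(token_sequences: list[list[str]], min_count: int = 2) -> list[str]:
    """Count phoneme tokens across a corpus and keep only those that
    appear at least min_count times; everything else would map to the
    "<unk>" token at index 0."""
    counts = Counter(tok for seq in token_sequences for tok in seq)
    kept = sorted(tok for tok, c in counts.items() if c >= min_count)
    return ["<unk>"] + kept


# Tiny made-up corpus of phoneme sequences for demonstration.
corpus = [["n", "i", "h", "ao"], ["n", "i", "m", "en"], ["h", "ao"]]
print(build_vocab(corpus, min_count=2))  # ['<unk>', 'ao', 'h', 'i', 'n']
```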
Try this: https://github.com/skysbird/g2p-zh-en