Paayas P
If you could assist me with training the LSC or cascade model for a different source speaker, that would be greatly appreciated.
Hello @unilight, I used an ASI speaker (**Source**) to train the LSC model, and I set the target to BDL. The voice of the TXHC speaker appeared in the wav...
[ASI_BDL_LSC.zip](https://github.com/user-attachments/files/15982166/ASI_BDL_LSC.zip) Here I am providing some of the results I obtained while decoding. It seems that the decoder we are using is ppg_sxliu_decoder_THXC and the vocoder is...
@unilight I see now. Could you assist me with how to fine-tune for a new speaker or retrain the models?
@unilight Thank you, I will look once again.
Greetings, @unilight. As you indicated in https://github.com/unilight/seq2seq-vc/tree/main/egs/l2-arctic, you are employing the [S3PRL-VC](https://github.com/unilight/s3prl-vc) toolbox for non-parallel frame-based VC model training. Could you please help me with training on my own dataset?
Hello @unilight, thank you for your time. I was able to train the non-parallel frame-based VC model on my dataset, but the waveform produced while decoding does not seem to capture the speaker...
Greetings, @unilight. Could you please provide me with instructions on how to convert the accent of multiple speakers to one target speaker?