Wen-Chin Huang (unilight)
@Ryu1845 Hi, can you tell me whether you used the pre-trained model, and which stages you executed?
@Ryu1845 I apologize for the bug. The error was due to a bug in the `s3prl-vc` package. I need some time to fix and test it. Please wait patiently...
@Ryu1845 I apologize for replying so late. Can you try updating the `s3prl-vc` package by executing the following?

```
cd tools
make s3prl_vc
```

And then see if you get...
Hi @leelee724 did you calculate the CER/WER? What were the numbers?
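For reference, CER and WER are typically computed as the Levenshtein edit distance between the reference and hypothesis transcripts, normalized by the reference length. Below is a minimal, self-contained sketch (not the exact script used in the recipe; the function names `edit_distance` and `error_rate` are illustrative):

```python
def edit_distance(ref, hyp):
    """Classic dynamic-programming Levenshtein distance between two sequences."""
    # d holds the previous row of the DP table, updated in place.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i  # prev = d[i-1][0]; d[0] = d[i][0]
        for j, h in enumerate(hyp, 1):
            # d[j] is d[i-1][j], d[j-1] is d[i][j-1], prev is d[i-1][j-1].
            prev, d[j] = d[j], min(d[j] + 1,          # deletion
                                   d[j - 1] + 1,      # insertion
                                   prev + (r != h))   # substitution / match
    return d[-1]

def error_rate(ref, hyp, unit="char"):
    """CER when unit='char', WER when unit='word'."""
    ref_seq = list(ref) if unit == "char" else ref.split()
    hyp_seq = list(hyp) if unit == "char" else hyp.split()
    return edit_distance(ref_seq, hyp_seq) / len(ref_seq)
```

For example, `error_rate("a b c", "a x c", unit="word")` gives a WER of 1/3 (one substitution over three reference words). In practice, the transcripts usually come from an ASR model run on the converted speech.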
Hi @leelee724 which config file did you use? Was `num_train` set to 932? And did you find this problem in a lot of samples? Just to clarify, there are some...
Hmm, I am happy to collaborate, but I am not sure what the end goal of this project is, and thus I am not sure how I can help.
@bshall Thank you for the reply! I do find that the segmenter works well on other speakers. I am just wondering how you found the mapping from cluster index to sonorants,...
Hi @Fbarrade, currently for the accent conversion recipes, we don't support pre-trained models, so pure inference is not possible right now. You can, however, train the model yourself...
Hi @PAAYAS, can you try to follow the instructions in the readme here: https://github.com/unilight/seq2seq-vc/tree/main/egs/l2-arctic/lsc, and then see if you have any problem? If you only want to convert "from" a...
@PAAYAS The current methods (all three) cannot convert from a specific new speaker without re-training (or fine-tuning) using the data from that new speaker.