FragmentVC
Any-to-any voice conversion by end-to-end extracting and fusing fine-grained voice fragments with attention
I used the fragmentvc.pt and vocoder.pt from the Releases, then fed VCTK data at a sample rate of 48000 to generate conversion results. But the phrase of the generated result becomes...
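A likely cause of garbled output in this situation: the wav2vec 2.0 feature extractor used by FragmentVC expects 16 kHz audio, while the VCTK release files are 48 kHz. Below is a minimal sketch of the sample-rate mismatch, using naive integer decimation only to illustrate the 48 kHz to 16 kHz ratio; the function name and defaults are illustrative, and a real pipeline should use a proper resampler with an anti-aliasing filter (e.g. librosa.resample or torchaudio.transforms.Resample).

```python
# Hedged sketch: wav2vec 2.0 features assume 16 kHz input, so 48 kHz VCTK
# audio should be resampled before conversion. Naive decimation (keeping
# every 3rd sample) shows the ratio but skips the anti-alias filter a
# real resampler would apply.

SOURCE_SR = 48_000
TARGET_SR = 16_000

def naive_decimate(samples, source_sr=SOURCE_SR, target_sr=TARGET_SR):
    """Keep every (source_sr // target_sr)-th sample (no anti-alias filter)."""
    assert source_sr % target_sr == 0, "sketch assumes an integer ratio"
    step = source_sr // target_sr
    return samples[::step]

wav_48k = [0.0] * 48_000          # one second of silence at 48 kHz
wav_16k = naive_decimate(wav_48k)
print(len(wav_16k))               # one second at 16 kHz -> 16000 samples
```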
What is the difference between CHECKPOINT_PATH and WAV2VEC_PATH? Is the linked file "wav2vec_small.pt" the checkpoint file or the wav2vec file? And where can I find the remaining file, so that I can...
I have been trying to train the FragmentVC model on my own dataset. It works fine with the VCTK dataset, but when I try it with my own dataset, I get...
Thanks for sharing the code and model. I'm trying to reproduce the SVAR results in the paper and find I can get the EER down to 1.57% with a threshold...
I am using the pretrained XLSR_wav2vec2 model as a vocoder. I succeeded in loading the model and extracting features from my dataset. However, I am facing issues in running the training...
Hello, when I use the wav2vec pre-trained model (wav2vec_small.pt), I get the error "AttributeError: 'Namespace' object has no attribute 'extractor_mode'". I have tried the following, but can't solve it; could you...
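This error usually indicates a fairseq version mismatch: the `args` Namespace saved inside the checkpoint predates fields that the newer loading code reads. One common workaround is to fill in the missing attributes with defaults before the model is constructed. The sketch below is a hypothetical illustration; the default values chosen (and the set of missing fields) depend on the fairseq versions involved and are assumptions, not official values.

```python
# Hedged sketch: "'Namespace' object has no attribute 'extractor_mode'"
# typically means the checkpoint's saved args lack fields the newer
# fairseq loader expects. Patch the missing attributes with defaults
# before building the model. The defaults below are illustrative only.
from argparse import Namespace

WAV2VEC_DEFAULTS = {
    "extractor_mode": "default",   # assumed default, check your fairseq version
    "layer_norm_first": False,
}

def patch_missing_args(args, defaults=WAV2VEC_DEFAULTS):
    """Set any attribute the loader expects but the checkpoint lacks."""
    for key, value in defaults.items():
        if not hasattr(args, key):
            setattr(args, key, value)
    return args

# Simulate a checkpoint whose saved args predate 'extractor_mode':
old_args = Namespace(encoder_layers=12)
patched = patch_missing_args(old_args)
print(patched.extractor_mode)   # -> default
```

Pinning fairseq to the version the repository was developed against is usually the cleaner fix than patching the Namespace by hand.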
Hello, I want to know which dataset the pre-trained model in the Releases was trained on. Because I get good results with VCTK data, but not with AISHELL-3.
How do I run this code?