hubert
HuBERT content encoders for: A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion
Hi, thank you for your great work; it helps me a lot! I have a question about the HuBERT discrete model. Have you done any kind of training for this...
In the train.py file, there is an argument named `--warmstart` that allows "initializ[ing] from the fairseq HuBERT checkpoint". I wonder which checkpoint it is, since fairseq offers a lot of...
Hello there. I'm in the process of setting up a Colab notebook to train a couple of models needed for inference in Soft-VC, mostly for personal ease of access. One of the first...
How do I properly create the length.json file?
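One possible way to generate it, as a minimal sketch: this assumes length.json simply maps each training utterance (relative path without extension) to its length in samples. The directory layout and the exact key/value format are assumptions, so compare against what train.py actually reads.

```python
# Hypothetical sketch: build length.json by mapping each utterance's
# relative path (without extension) to its length in samples.
# The exact key/value format that train.py expects is an assumption here.
import json
from pathlib import Path

import soundfile as sf

dataset_dir = Path("path/to/dataset/wavs")  # assumed layout

lengths = {}
for wav_path in sorted(dataset_dir.rglob("*.wav")):
    info = sf.info(str(wav_path))  # reads only the header, no full decode
    key = str(wav_path.relative_to(dataset_dir).with_suffix(""))
    lengths[key] = info.frames     # length in samples

with open("length.json", "w") as f:
    json.dump(lengths, f, indent=2)
```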
Hi, thanks for your great work. Could you point me to how to rebuild the HuBERT-Discrete model?
What is the reason? The quality should be better...
Hi, thanks for sharing this great work. What should the validation/training accuracy/loss be to ensure good enough training?
My problem is as in the title: how do I train the HuBERT-Soft content encoder on a Mandarin Chinese dataset? Do I still use the pretrained HuBERT model to extract discrete speech units (the encode-dataset step)?...
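For reference, a minimal sketch of what that encode step typically looks like, assuming the `hubert_discrete` torch.hub entry point and its `units()` method behave as described in the project README; the paths and the .npy output format are placeholders, so check encode.py in the repository for the exact expected layout.

```python
# Hypothetical sketch of the encode-dataset step: extract discrete speech
# units for each utterance with the pretrained HuBERT-Discrete model.
# The torch.hub entry point, units() API, and output format are assumptions
# based on the project README; verify against encode.py in the repository.
from pathlib import Path

import numpy as np
import torch
import torchaudio

device = "cuda" if torch.cuda.is_available() else "cpu"
hubert = torch.hub.load("bshall/hubert:main", "hubert_discrete").to(device)

in_dir = Path("path/to/mandarin/wavs")   # placeholder dataset location
out_dir = Path("path/to/units")          # placeholder output location
out_dir.mkdir(parents=True, exist_ok=True)

for wav_path in sorted(in_dir.rglob("*.wav")):
    wav, sr = torchaudio.load(wav_path)
    wav = torchaudio.functional.resample(wav, sr, 16000)  # model expects 16 kHz
    wav = wav.unsqueeze(0).to(device)                      # (1, 1, T)
    with torch.inference_mode():
        units = hubert.units(wav)                          # discrete unit ids
    np.save(out_dir / f"{wav_path.stem}.npy", units.cpu().numpy())
```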
(Officially released model: https://github.com/facebookresearch/fairseq/tree/main/examples/hubert) What is the difference between the model you released and the officially released one? Or, how was the model you released trained?