Real-Time-Voice-Cloning
How to use a single-speaker trained model for voice cloning?
I followed this link to create a sample dataset from a single speaker. From that dataset, I ran the encoder and trained the synthesizer and vocoder models. After training finished, the model files were written to saved_models/<experiment>.
Then I wanted to use the custom trained models instead of the default ones, so I tried this command: python demo_toolbox.py -m ./saved_models/my_run/. Instead of picking up the custom models, the toolbox downloads the default models again and runs inference with them.
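For reference, here is a minimal sketch of the layout that recent versions of the repository appear to expect: one run directory under saved_models/ containing encoder.pt, synthesizer.pt, and vocoder.pt. The run name my_run and the touch commands below are placeholders for copies of the real trained checkpoints, and the exact file names and -m flag semantics vary between repo versions, so check python demo_toolbox.py --help in your checkout:

```shell
# Sketch of the expected checkpoint layout (assumption: a post-refactor
# version where each model is a single .pt file per run directory).
mkdir -p saved_models/my_run

# Placeholders — replace with copies of your actual trained checkpoints:
touch saved_models/my_run/encoder.pt
touch saved_models/my_run/synthesizer.pt
touch saved_models/my_run/vocoder.pt

# Point the toolbox at the models root (not the run directory itself) and
# select my_run in the UI; in some versions -m takes the parent directory:
# python demo_toolbox.py -m saved_models
ls saved_models/my_run
```

If the file names don't match what the toolbox looks for, it may fall back to downloading the default models, which would explain the behavior described above.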
How do I properly use my trained models for inference? Any help would be appreciated. If you need more details, please let me know.