Thi Vũ
Make sure the symbols list in `symbols.py` contains all the symbols in your dataset. If you want to train on a new language, you need to add a new set of...
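A minimal sketch of what that could look like. The names `_letters`, `_new_language_letters`, and `symbols` are assumptions based on common TTS symbol files, not the actual contents of this repo:

```python
# Hypothetical symbols.py layout (names are assumptions, not the repo's actual code).
_pad = "_"
_punctuation = "!'(),.:;? "
_letters = "abcdefghijklmnopqrstuvwxyz"
# Characters needed for the new language, e.g. some Vietnamese letters/diacritics.
_new_language_letters = "ăâđêôơưàáảãạ"

# Every character that appears in your transcripts must be listed here,
# otherwise tokenization can fail or silently drop symbols.
symbols = [_pad] + list(_punctuation) + list(_letters) + list(_new_language_letters)

def missing_symbols(transcripts, symbols):
    """Return characters present in the transcripts but absent from the symbols list."""
    return sorted({ch for line in transcripts for ch in line} - set(symbols))
```

A quick sanity check like `missing_symbols(my_transcripts, symbols)` before training can catch characters you forgot to add.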
Found a decent implementation with training code here: [NoFish-528/encodec-pytorch: unofficial implementation of the High Fidelity Neural Audio Compression](https://github.com/NoFish-528/encodec-pytorch). Hope it helps you guys.
Did anyone find a solution?
@WoBuChiTang Hi, I need to train this model on a dataset of long audio (up to 20 seconds per clip). Curious what's the largest `max_num_tokens` you were able to pull off with...
Hi, curious whether you got it working?
@ZhangLei999 Hi, did you try retraining the discrete model? How are the results?
Hi @jasonppy, I have the same question and would love to hear your thoughts on this.
Isn't `inference_tts_scale.py` the way to do batch inference?
@rishikksh20 Hi, I'm curious about the size of the dataset you used for multilingual fine-tuning. I am currently fine-tuning the model on 450 hours of Vietnamese plus 115 hours of English data,...
@jasonppy Hi, can you explain why you chose to retrain encodec instead of using the released model? Are 8 codebooks too many?