wyp19930313
I read your paper. The input size for upstream pre-training is 224, while input sizes of 128 and 480 are used for fine-tuning on downstream tasks. If I want to use your pre-trained model to...
Prose model dataset
Hello, could you share the dataset for the prose model? Many thanks.
Hello, I used the pre-trained model you provided to perform voice conversion for Chinese. I checked the results and found that the non-linguistic information of the output file is not...
Hello, I used this code to train on a Chinese dataset. The training-set loss is: AE:[425993/2000000], loss_rec=0.21, loss_kl=0.27, lambda=1.0e+0, and I find that the loss hardly decreases....
Hello, I have two questions from reading your code. Could you please answer them when you have time? 1. Why is the forward propagation of training and prediction...
(Officially released model: https://github.com/facebookresearch/fairseq/tree/main/examples/hubert) What is the difference between the model you released and the officially released one? Or, how was the model you released trained?