w2v2-how-to
How to fine-tune the pretrained model
I was wondering how to fine-tune the released model on another dataset.
For fine-tuning you can download the torch version of the model from https://huggingface.co/audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim.
We mention in the README that the torch model is published there, but maybe we should highlight this more?
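A minimal sketch of what loading the torch model for fine-tuning could look like, assuming the transformers and torch packages are installed. Note that plain Wav2Vec2Model only loads the backbone; the regression head from the paper is not reconstructed here, so its weights are dropped with a warning:

```python
import torch
from transformers import Wav2Vec2Model, Wav2Vec2Processor

model_name = "audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim"
processor = Wav2Vec2Processor.from_pretrained(model_name)
# Loads only the wav2vec 2.0 backbone; the published regression head
# weights are ignored (transformers will print a warning about this).
model = Wav2Vec2Model.from_pretrained(model_name)

# One second of silence at 16 kHz, just to check the forward pass.
signal = torch.zeros(16000)
inputs = processor(signal.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden = model(inputs.input_values).last_hidden_state
print(hidden.shape)  # (batch, frames, 1024) for the large model
```

From here you would attach your own task head on top of the hidden states and train on your dataset.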
Some details, like the Adam optimizer hyperparameters, are not given in the paper. Should we assume you used the default Wav2Vec2 hyperparameters where not specified?
Yes, if not mentioned otherwise, we kept to the default parameters of TrainingArguments.
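So in code this would simply mean not setting those arguments at all. A minimal sketch (output_dir and learning_rate are illustrative, not values from the paper):

```python
from transformers import TrainingArguments

# Everything not set explicitly falls back to the Hugging Face defaults,
# e.g. Adam with betas=(0.9, 0.999), epsilon=1e-8, and weight_decay=0.0.
training_args = TrainingArguments(
    output_dir="./w2v2-finetuned",  # hypothetical output path
    learning_rate=5e-5,             # the TrainingArguments default, shown for illustration
)
```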