
How to fine-tune the pretrained model

Hyfred opened this issue 2 years ago · 3 comments

I was wondering how to fine-tune the released model on another dataset.

Hyfred · Aug 23 '22 07:08

For fine-tuning, you can download the torch version of the model from https://huggingface.co/audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim.

We mention in the README that the torch model is published there:

[screenshot of the README passage pointing to the torch model on Hugging Face]

but maybe we should highlight this more?

hagenw · Sep 08 '22 09:09
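
For illustration, a minimal sketch of loading that torch checkpoint with the transformers library and attaching a fresh head for fine-tuning on another dataset. The `FineTuneHead` class, the mean pooling, and the three output targets are placeholder assumptions for this sketch, not the authors' recipe:

```python
# Minimal sketch: load the published checkpoint and put a new head on the encoder.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

checkpoint = "audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim"

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(checkpoint)
backbone = Wav2Vec2Model.from_pretrained(checkpoint)  # pretrained encoder weights


class FineTuneHead(torch.nn.Module):
    """Hypothetical head; replace with whatever your target task needs."""

    def __init__(self, hidden_size: int, num_targets: int):
        super().__init__()
        self.out = torch.nn.Linear(hidden_size, num_targets)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        pooled = hidden_states.mean(dim=1)  # mean-pool over the time axis
        return self.out(pooled)


head = FineTuneHead(backbone.config.hidden_size, num_targets=3)

# Sanity check with one second of silence at 16 kHz
inputs = feature_extractor([[0.0] * 16000], sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden = backbone(inputs.input_values).last_hidden_state
print(head(hidden).shape)  # torch.Size([1, 3])
```

From there, the backbone and head can be wrapped in an ordinary training loop or a transformers Trainer for fine-tuning on the new dataset.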

Some details, like the Adam optimizer hyperparameters, are not given in the paper. Should we assume that you used the default Wav2Vec2 hyperparameters if they are not specified in the paper?

Vedaad-Shakib · Jan 09 '24 23:01

Yes, if not mentioned otherwise, we kept the default parameters of TrainingArguments.

frankenjoe · Jan 10 '24 08:09