
Language model fine-tuning

Open · braaannigan opened this issue 5 years ago • 0 comments

Nice work!

I often start out with much more unlabelled than labelled data. Is it possible to first do masked-language-model fine-tuning (without the classification head) on the full set of data, before adding the classifier?
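Something like the following is what I have in mind, as a rough sketch using Hugging Face transformers directly (which ernie builds on), since I don't think ernie itself exposes an MLM head. The file path, checkpoint name, and hyperparameters are just placeholders:

```python
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    LineByLineTextDataset,
    Trainer,
    TrainingArguments,
)

model_name = 'bert-base-uncased'  # placeholder; any MLM-pretrained checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# One unlabelled sentence per line; the path is a placeholder.
dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path='unlabelled.txt',
    block_size=128,
)

# Randomly mask 15% of tokens, as in BERT pre-training.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.15,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir='mlm-adapted', num_train_epochs=1),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()

# Save the adapted weights so they can be loaded as the base model
# for the later classification fine-tuning step.
trainer.save_model('mlm-adapted')
tokenizer.save_pretrained('mlm-adapted')
```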

If not, would a second-best approach be to do it iteratively, i.e. train on the small amount of labelled data, predict labels for the unlabelled data, fine-tune on the labelled data plus the predictions, and then re-train just on the labelled data? A rough sketch of what I mean is below.
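This sketch uses ernie's documented API; the file names, the 0.9 confidence cutoff, and the assumption that `predict_one` returns per-class probabilities are mine, not from the library:

```python
import pandas as pd
from ernie import SentenceClassifier, Models

# Hypothetical files: column 0 = text, column 1 = label (no header).
labelled_df = pd.read_csv('labelled.csv', header=None)
unlabelled_texts = pd.read_csv('unlabelled.csv', header=None)[0].tolist()

# 1. Train on the small labelled set.
classifier = SentenceClassifier(
    model_name=Models.BertBaseUncased, max_length=128, labels_no=2)
classifier.load_dataset(labelled_df, validation_split=0.1)
classifier.fine_tune(epochs=4, learning_rate=2e-5)

# 2. Pseudo-label the unlabelled texts, keeping confident predictions only.
#    Assumes predict_one returns per-class probabilities.
pseudo = []
for text in unlabelled_texts:
    probs = classifier.predict_one(text)
    label = int(max(range(len(probs)), key=probs.__getitem__))
    if probs[label] >= 0.9:  # illustrative confidence cutoff
        pseudo.append((text, label))

# 3. Fine-tune on labelled data plus pseudo-labels...
#    (I'm assuming repeated fine_tune calls continue from current weights.)
combined_df = pd.concat(
    [labelled_df, pd.DataFrame(pseudo)], ignore_index=True)
classifier.load_dataset(combined_df, validation_split=0.1)
classifier.fine_tune(epochs=2, learning_rate=2e-5)

# 4. ...then re-train on the labelled data alone.
classifier.load_dataset(labelled_df, validation_split=0.1)
classifier.fine_tune(epochs=2, learning_rate=2e-5)
```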

braaannigan · Mar 03 '20, 10:03